00:00:00.001 Started by upstream project "autotest-per-patch" build number 127119
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.098 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.098 The recommended git tool is: git
00:00:00.098 using credential 00000000-0000-0000-0000-000000000002
00:00:00.100 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.132 Fetching changes from the remote Git repository
00:00:00.135 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.175 Using shallow fetch with depth 1
00:00:00.175 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.175 > git --version # timeout=10
00:00:00.202 > git --version # 'git version 2.39.2'
00:00:00.202 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.222 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.222 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.213 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.225 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.238 Checking out Revision f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 (FETCH_HEAD)
00:00:04.238 > git config core.sparsecheckout # timeout=10
00:00:04.249 > git read-tree -mu HEAD # timeout=10
00:00:04.267 > git checkout -f f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=5
00:00:04.311 Commit message: "spdk-abi-per-patch: fix check-so-deps-docker-autotest parameters"
00:00:04.311 > git rev-list --no-walk f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08 # timeout=10
00:00:04.418 [Pipeline] Start of Pipeline
00:00:04.433 [Pipeline] library
00:00:04.435 Loading library shm_lib@master
00:00:07.647 Library shm_lib@master is cached. Copying from home.
00:00:07.682 [Pipeline] node
00:00:07.803 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.807 [Pipeline] {
00:00:07.831 [Pipeline] catchError
00:00:07.834 [Pipeline] {
00:00:07.857 [Pipeline] wrap
00:00:07.871 [Pipeline] {
00:00:07.883 [Pipeline] stage
00:00:07.886 [Pipeline] { (Prologue)
00:00:08.107 [Pipeline] sh
00:00:08.382 + logger -p user.info -t JENKINS-CI
00:00:08.395 [Pipeline] echo
00:00:08.396 Node: GP11
00:00:08.404 [Pipeline] sh
00:00:08.695 [Pipeline] setCustomBuildProperty
00:00:08.706 [Pipeline] echo
00:00:08.707 Cleanup processes
00:00:08.712 [Pipeline] sh
00:00:08.991 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.991 3163570 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.004 [Pipeline] sh
00:00:09.284 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.284 ++ awk '{print $1}'
00:00:09.284 ++ grep -v 'sudo pgrep'
00:00:09.284 + sudo kill -9
00:00:09.284 + true
00:00:09.301 [Pipeline] cleanWs
00:00:09.310 [WS-CLEANUP] Deleting project workspace...
00:00:09.310 [WS-CLEANUP] Deferred wipeout is used...
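The "Cleanup processes" step above is the usual pgrep/grep/awk/kill idiom for reaping stale test processes before a run; note that kill -9 receives an empty PID list here (only the pgrep itself matched) and the script tolerates that. A minimal standalone sketch of the same pattern, assuming the workspace path from the trace and a plain bash environment:

#!/usr/bin/env bash
# Kill any leftover processes still running out of the SPDK workspace.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the log
# pgrep -af matches full command lines; drop the pgrep itself, keep the PIDs.
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')
# kill fails with a usage error when $pids is empty, hence the guard,
# mirroring the "+ true" that follows "+ sudo kill -9" in the trace.
sudo kill -9 $pids || true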
00:00:09.317 [WS-CLEANUP] done
00:00:09.320 [Pipeline] setCustomBuildProperty
00:00:09.331 [Pipeline] sh
00:00:09.609 + sudo git config --global --replace-all safe.directory '*'
00:00:09.694 [Pipeline] httpRequest
00:00:09.712 [Pipeline] echo
00:00:09.713 Sorcerer 10.211.164.101 is alive
00:00:09.719 [Pipeline] httpRequest
00:00:09.722 HttpMethod: GET
00:00:09.723 URL: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.724 Sending request to url: http://10.211.164.101/packages/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:09.725 Response Code: HTTP/1.1 200 OK
00:00:09.726 Success: Status code 200 is in the accepted range: 200,404
00:00:09.726 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:10.970 [Pipeline] sh
00:00:11.255 + tar --no-same-owner -xf jbp_f0c44d8f8e3d61ecd9e3e442b9b5901b0cc7ca08.tar.gz
00:00:11.273 [Pipeline] httpRequest
00:00:11.289 [Pipeline] echo
00:00:11.290 Sorcerer 10.211.164.101 is alive
00:00:11.297 [Pipeline] httpRequest
00:00:11.301 HttpMethod: GET
00:00:11.302 URL: http://10.211.164.101/packages/spdk_a1abc21f8ceb2cc7dcfb29ac1464fd35d8925ae7.tar.gz
00:00:11.303 Sending request to url: http://10.211.164.101/packages/spdk_a1abc21f8ceb2cc7dcfb29ac1464fd35d8925ae7.tar.gz
00:00:11.311 Response Code: HTTP/1.1 200 OK
00:00:11.311 Success: Status code 200 is in the accepted range: 200,404
00:00:11.312 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_a1abc21f8ceb2cc7dcfb29ac1464fd35d8925ae7.tar.gz
00:00:55.230 [Pipeline] sh
00:00:55.511 + tar --no-same-owner -xf spdk_a1abc21f8ceb2cc7dcfb29ac1464fd35d8925ae7.tar.gz
00:00:58.056 [Pipeline] sh
00:00:58.338 + git -C spdk log --oneline -n5
00:00:58.338 a1abc21f8 autopackage: Replace SPDK_TEST_RELEASE_BUILD with SPDK_TEST_PACKAGING
00:00:58.338 78cbcfdde test/scheduler: fix cpu mask for rpc governor tests
00:00:58.338 ba69d4678 event/scheduler: remove custom opts from static scheduler
00:00:58.338 79fce488b test/scheduler: test scheduling period with dynamic scheduler
00:00:58.338 673f37314 ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair
00:00:58.351 [Pipeline] }
00:00:58.368 [Pipeline] // stage
00:00:58.378 [Pipeline] stage
00:00:58.380 [Pipeline] { (Prepare)
00:00:58.396 [Pipeline] writeFile
00:00:58.411 [Pipeline] sh
00:00:58.692 + logger -p user.info -t JENKINS-CI
00:00:58.705 [Pipeline] sh
00:00:58.986 + logger -p user.info -t JENKINS-CI
00:00:59.000 [Pipeline] sh
00:00:59.279 + cat autorun-spdk.conf
00:00:59.279 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.279 SPDK_TEST_NVMF=1
00:00:59.279 SPDK_TEST_NVME_CLI=1
00:00:59.279 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.279 SPDK_TEST_NVMF_NICS=e810
00:00:59.279 SPDK_TEST_VFIOUSER=1
00:00:59.279 SPDK_RUN_UBSAN=1
00:00:59.279 NET_TYPE=phy
00:00:59.287 RUN_NIGHTLY=0
00:00:59.292 [Pipeline] readFile
00:00:59.318 [Pipeline] withEnv
00:00:59.321 [Pipeline] {
00:00:59.333 [Pipeline] sh
00:00:59.616 + set -ex
00:00:59.616 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:59.616 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:59.616 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:59.616 ++ SPDK_TEST_NVMF=1
00:00:59.616 ++ SPDK_TEST_NVME_CLI=1
00:00:59.616 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:59.616 ++ SPDK_TEST_NVMF_NICS=e810
00:00:59.616 ++ SPDK_TEST_VFIOUSER=1
00:00:59.616 ++ SPDK_RUN_UBSAN=1
00:00:59.616 ++ NET_TYPE=phy
00:00:59.616 ++ RUN_NIGHTLY=0
00:00:59.616 + case $SPDK_TEST_NVMF_NICS in
00:00:59.616 + DRIVERS=ice
00:00:59.616 + [[ tcp == \r\d\m\a ]]
00:00:59.616 + [[ -n ice ]]
00:00:59.616 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:59.616 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:59.616 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:59.616 rmmod: ERROR: Module irdma is not currently loaded
00:00:59.616 rmmod: ERROR: Module i40iw is not currently loaded
00:00:59.616 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:59.616 + true
00:00:59.616 + for D in $DRIVERS
00:00:59.616 + sudo modprobe ice
00:00:59.616 + exit 0
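The NIC-prep block above maps SPDK_TEST_NVMF_NICS to a kernel driver, unloads RDMA modules that could conflict (the rmmod errors are expected when nothing is loaded), and loads the driver it needs. A minimal sketch of that logic; the e810-to-ice branch is the one visible in the trace, the other mappings are assumptions:

#!/usr/bin/env bash
set -ex
source ./autorun-spdk.conf
case $SPDK_TEST_NVMF_NICS in
  e810) DRIVERS=ice ;;     # branch actually taken above
  *)    DRIVERS= ;;        # other NIC-to-driver mappings not shown in this log
esac
if [[ -n $DRIVERS ]]; then
  # rmmod exits non-zero for modules that are not loaded; that is fine here
  sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
  for D in $DRIVERS; do
    sudo modprobe "$D"
  done
fi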
00:00:59.626 [Pipeline] }
00:00:59.644 [Pipeline] // withEnv
00:00:59.650 [Pipeline] }
00:00:59.668 [Pipeline] // stage
00:00:59.679 [Pipeline] catchError
00:00:59.681 [Pipeline] {
00:00:59.697 [Pipeline] timeout
00:00:59.697 Timeout set to expire in 50 min
00:00:59.699 [Pipeline] {
00:00:59.716 [Pipeline] stage
00:00:59.718 [Pipeline] { (Tests)
00:00:59.735 [Pipeline] sh
00:01:00.017 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.017 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.017 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.017 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:00.017 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:00.017 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.017 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:00.017 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.017 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:00.017 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:00.017 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:00.017 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:00.017 + source /etc/os-release
00:01:00.017 ++ NAME='Fedora Linux'
00:01:00.017 ++ VERSION='38 (Cloud Edition)'
00:01:00.017 ++ ID=fedora
00:01:00.017 ++ VERSION_ID=38
00:01:00.017 ++ VERSION_CODENAME=
00:01:00.017 ++ PLATFORM_ID=platform:f38
00:01:00.017 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:00.017 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:00.017 ++ LOGO=fedora-logo-icon
00:01:00.017 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:00.017 ++ HOME_URL=https://fedoraproject.org/
00:01:00.017 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:00.017 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:00.017 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:00.017 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:00.017 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:00.017 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:00.017 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:00.017 ++ SUPPORT_END=2024-05-14
00:01:00.017 ++ VARIANT='Cloud Edition'
00:01:00.017 ++ VARIANT_ID=cloud
00:01:00.017 + uname -a
00:01:00.017 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:00.017 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:00.951 Hugepages
00:01:00.951 node hugesize free / total
00:01:00.951 node0 1048576kB 0 / 0
00:01:00.951 node0 2048kB 0 / 0
00:01:00.951 node1 1048576kB 0 / 0
00:01:00.951 node1 2048kB 0 / 0
00:01:00.951
00:01:00.951 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:00.951 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:00.951 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:01.210 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:01.210 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:01.210 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:01.210 + rm -f /tmp/spdk-ld-path
00:01:01.210 + source autorun-spdk.conf
00:01:01.210 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.210 ++ SPDK_TEST_NVMF=1
00:01:01.210 ++ SPDK_TEST_NVME_CLI=1
00:01:01.210 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.210 ++ SPDK_TEST_NVMF_NICS=e810
00:01:01.210 ++ SPDK_TEST_VFIOUSER=1
00:01:01.210 ++ SPDK_RUN_UBSAN=1
00:01:01.210 ++ NET_TYPE=phy
00:01:01.210 ++ RUN_NIGHTLY=0
00:01:01.210 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:01.210 + [[ -n '' ]]
00:01:01.210 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:01.210 + for M in /var/spdk/build-*-manifest.txt
00:01:01.210 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:01.210 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.210 + for M in /var/spdk/build-*-manifest.txt
00:01:01.210 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:01.210 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:01.210 ++ uname
00:01:01.210 + [[ Linux == \L\i\n\u\x ]]
00:01:01.210 + sudo dmesg -T
00:01:01.210 + sudo dmesg --clear
00:01:01.210 + dmesg_pid=3164244
00:01:01.210 + [[ Fedora Linux == FreeBSD ]]
00:01:01.210 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.210 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:01.210 + sudo dmesg -Tw
00:01:01.210 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:01.210 + [[ -x /usr/src/fio-static/fio ]]
00:01:01.210 + export FIO_BIN=/usr/src/fio-static/fio
00:01:01.210 + FIO_BIN=/usr/src/fio-static/fio
00:01:01.210 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:01.210 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:01.210 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:01.210 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.210 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:01.210 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:01.210 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.210 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:01.210 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:01.210 Test configuration:
00:01:01.210 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:01.210 SPDK_TEST_NVMF=1
00:01:01.210 SPDK_TEST_NVME_CLI=1
00:01:01.210 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:01.210 SPDK_TEST_NVMF_NICS=e810
00:01:01.210 SPDK_TEST_VFIOUSER=1
00:01:01.210 SPDK_RUN_UBSAN=1
00:01:01.210 NET_TYPE=phy
00:01:01.210 RUN_NIGHTLY=0
23:38:31 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
23:38:31 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
23:38:31 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
23:38:31 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
23:38:31 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:38:31 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:38:31 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:38:31 -- paths/export.sh@5 -- $ export PATH
23:38:31 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:38:31 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
23:38:31 -- common/autobuild_common.sh@447 -- $ date +%s
23:38:31 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721857111.XXXXXX
23:38:31 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721857111.VOj9ZT
23:38:31 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
23:38:31 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
23:38:31 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
23:38:31 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
23:38:31 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
23:38:31 -- common/autobuild_common.sh@463 -- $ get_config_params
23:38:31 -- common/autotest_common.sh@396 -- $ xtrace_disable
23:38:31 -- common/autotest_common.sh@10 -- $ set +x
23:38:31 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
23:38:31 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
23:38:31 -- pm/common@17 -- $ local monitor
23:38:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
23:38:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
23:38:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
23:38:31 -- pm/common@21 -- $ date +%s
23:38:31 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
23:38:31 -- pm/common@21 -- $ date +%s
23:38:31 -- pm/common@25 -- $ sleep 1
23:38:31 -- pm/common@21 -- $ date +%s
23:38:31 -- pm/common@21 -- $ date +%s
23:38:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857111
23:38:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857111
23:38:31 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857111
23:38:31 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721857111
00:01:01.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857111_collect-vmstat.pm.log
00:01:01.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857111_collect-cpu-load.pm.log
00:01:01.468 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857111_collect-cpu-temp.pm.log
00:01:01.469 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721857111_collect-bmc-pm.bmc.pm.log
00:01:02.402 23:38:32 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:02.402 23:38:32 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
23:38:32 -- spdk/autobuild.sh@12 -- $ umask 022
23:38:32 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
23:38:32 -- spdk/autobuild.sh@16 -- $ date -u
00:01:02.402 Wed Jul 24 09:38:32 PM UTC 2024
23:38:32 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:02.402 v24.09-pre-310-ga1abc21f8
23:38:32 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
23:38:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
23:38:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
23:38:32 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
23:38:32 -- common/autotest_common.sh@1105 -- $ xtrace_disable
23:38:32 -- common/autotest_common.sh@10 -- $ set +x
00:01:02.402 ************************************
00:01:02.402 START TEST ubsan
00:01:02.402 ************************************
23:38:32 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:02.402 using ubsan
00:01:02.402
00:01:02.402 real 0m0.000s
00:01:02.402 user 0m0.000s
00:01:02.402 sys 0m0.000s
23:38:32 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
23:38:32 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:02.402 ************************************
00:01:02.402 END TEST ubsan
00:01:02.402 ************************************
23:38:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
23:38:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
23:38:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
23:38:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
23:38:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
23:38:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
23:38:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
23:38:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
23:38:32 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:02.402 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:02.402 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:02.662 Using 'verbs' RDMA provider
00:01:13.591 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:23.560 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:23.560 Creating mk/config.mk...done.
00:01:23.560 Creating mk/cc.flags.mk...done.
00:01:23.560 Type 'make' to build.
00:01:23.560 23:38:53 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
23:38:53 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
23:38:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable
23:38:53 -- common/autotest_common.sh@10 -- $ set +x
00:01:23.560 ************************************
00:01:23.560 START TEST make
00:01:23.560 ************************************
23:38:53 make -- common/autotest_common.sh@1123 -- $ make -j48
00:01:23.560 make[1]: Nothing to be done for 'all'.
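For reference, SPDK_RUN_UBSAN=1 is what feeds --enable-ubsan into the ./configure call above, i.e. the tree is built with UndefinedBehaviorSanitizer instrumentation. A hypothetical, self-contained illustration of what that flag amounts to at the compiler level; this demo is not part of the build, and the file names are made up:

# Minimal UBSan demo (illustrative only): compile a deliberately broken
# program with -fsanitize=undefined and let it report at runtime.
cat > /tmp/ubsan_demo.c <<'EOF'
#include <limits.h>
int main(void) {
    int x = INT_MAX;
    return (x + 1) & 1;   /* signed overflow: undefined behavior */
}
EOF
gcc -g -fsanitize=undefined -o /tmp/ubsan_demo /tmp/ubsan_demo.c
# UBSan prints a "runtime error: signed integer overflow" diagnostic
# instead of letting the overflow pass silently.
/tmp/ubsan_demo || true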
00:01:24.529 The Meson build system
00:01:24.529 Version: 1.3.1
00:01:24.529 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.529 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.529 Build type: native build
00:01:24.529 Project name: libvfio-user
00:01:24.529 Project version: 0.0.1
00:01:24.529 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:24.529 C linker for the host machine: cc ld.bfd 2.39-16
00:01:24.529 Host machine cpu family: x86_64
00:01:24.529 Host machine cpu: x86_64
00:01:24.529 Run-time dependency threads found: YES
00:01:24.529 Library dl found: YES
00:01:24.529 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:24.529 Run-time dependency json-c found: YES 0.17
00:01:24.529 Run-time dependency cmocka found: YES 1.1.7
00:01:24.529 Program pytest-3 found: NO
00:01:24.529 Program flake8 found: NO
00:01:24.529 Program misspell-fixer found: NO
00:01:24.529 Program restructuredtext-lint found: NO
00:01:24.529 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.529 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.529 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.529 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.529 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.529 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.529 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.529 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.529 Build targets in project: 8
00:01:24.529 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:24.529 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:24.529
00:01:24.529 libvfio-user 0.0.1
00:01:24.529
00:01:24.529 User defined options
00:01:24.529 buildtype : debug
00:01:24.529 default_library: shared
00:01:24.529 libdir : /usr/local/lib
00:01:24.529
00:01:24.529 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:25.105 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.368 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:25.368 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:25.368 [3/37] Compiling C object samples/null.p/null.c.o
00:01:25.368 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:25.368 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:25.368 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:25.368 [7/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:25.368 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:25.368 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:25.368 [10/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:25.368 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:25.368 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:25.368 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:25.368 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:25.368 [15/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:25.368 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:25.368 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:25.368 [18/37] Compiling C object samples/server.p/server.c.o
00:01:25.368 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:25.368 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:25.629 [21/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:25.629 [22/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:25.629 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:25.629 [24/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:25.629 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:25.629 [26/37] Compiling C object samples/client.p/client.c.o
00:01:25.629 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:25.629 [28/37] Linking target samples/client
00:01:25.629 [29/37] Linking target test/unit_tests
00:01:25.889 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:26.150 [31/37] Linking target lib/libvfio-user.so.0.0.1
00:01:26.150 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:26.150 [33/37] Linking target samples/null
00:01:26.150 [34/37] Linking target samples/gpio-pci-idio-16
00:01:26.150 [35/37] Linking target samples/server
00:01:26.150 [36/37] Linking target samples/lspci
00:01:26.150 [37/37] Linking target samples/shadow_ioeventfd_server
00:01:26.150 INFO: autodetecting backend as ninja
00:01:26.150 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:26.150 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:27.091 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:27.091 ninja: no work to do.
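The libvfio-user step above is a standard meson/ninja flow: configure a build directory, compile it with ninja, then stage the artifacts with DESTDIR so nothing lands in the real prefix. A minimal sketch of the same pattern under generic paths; this is not the SPDK wrapper itself, just the underlying commands:

# Configure, build, and stage a meson project into a throwaway root.
meson setup build-debug --buildtype=debug -Ddefault_library=shared
ninja -C build-debug
# DESTDIR prefixes every install path, so the system stays untouched;
# the log above uses the same trick with spdk/build/libvfio-user.
DESTDIR=$PWD/staging meson install --quiet -C build-debug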
00:01:31.280 The Meson build system
00:01:31.280 Version: 1.3.1
00:01:31.280 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:31.280 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:31.280 Build type: native build
00:01:31.280 Program cat found: YES (/usr/bin/cat)
00:01:31.280 Project name: DPDK
00:01:31.280 Project version: 24.03.0
00:01:31.280 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:31.280 C linker for the host machine: cc ld.bfd 2.39-16
00:01:31.280 Host machine cpu family: x86_64
00:01:31.280 Host machine cpu: x86_64
00:01:31.280 Message: ## Building in Developer Mode ##
00:01:31.280 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:31.280 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:31.280 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:31.280 Program python3 found: YES (/usr/bin/python3)
00:01:31.280 Program cat found: YES (/usr/bin/cat)
00:01:31.280 Compiler for C supports arguments -march=native: YES
00:01:31.280 Checking for size of "void *" : 8
00:01:31.280 Checking for size of "void *" : 8 (cached)
00:01:31.280 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:31.280 Library m found: YES
00:01:31.280 Library numa found: YES
00:01:31.280 Has header "numaif.h" : YES
00:01:31.280 Library fdt found: NO
00:01:31.280 Library execinfo found: NO
00:01:31.280 Has header "execinfo.h" : YES
00:01:31.280 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:31.280 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:31.280 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:31.280 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:31.280 Run-time dependency openssl found: YES 3.0.9
00:01:31.280 Run-time dependency libpcap found: YES 1.10.4
00:01:31.280 Has header "pcap.h" with dependency libpcap: YES
00:01:31.280 Compiler for C supports arguments -Wcast-qual: YES
00:01:31.280 Compiler for C supports arguments -Wdeprecated: YES
00:01:31.280 Compiler for C supports arguments -Wformat: YES
00:01:31.280 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:31.280 Compiler for C supports arguments -Wformat-security: NO
00:01:31.280 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:31.280 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:31.280 Compiler for C supports arguments -Wnested-externs: YES
00:01:31.280 Compiler for C supports arguments -Wold-style-definition: YES
00:01:31.280 Compiler for C supports arguments -Wpointer-arith: YES
00:01:31.280 Compiler for C supports arguments -Wsign-compare: YES
00:01:31.280 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:31.280 Compiler for C supports arguments -Wundef: YES
00:01:31.280 Compiler for C supports arguments -Wwrite-strings: YES
00:01:31.280 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:31.280 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:31.280 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:31.280 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:31.280 Program objdump found: YES (/usr/bin/objdump)
00:01:31.280 Compiler for C supports arguments -mavx512f: YES
00:01:31.280 Checking if "AVX512 checking" compiles: YES
00:01:31.280 Fetching value of define "__SSE4_2__" : 1
00:01:31.280 Fetching value of define "__AES__" : 1
00:01:31.280 Fetching value of define "__AVX__" : 1
00:01:31.280 Fetching value of define "__AVX2__" : (undefined)
00:01:31.280 Fetching value of define "__AVX512BW__" : (undefined)
00:01:31.280 Fetching value of define "__AVX512CD__" : (undefined)
00:01:31.280 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:31.280 Fetching value of define "__AVX512F__" : (undefined)
00:01:31.280 Fetching value of define "__AVX512VL__" : (undefined)
00:01:31.280 Fetching value of define "__PCLMUL__" : 1
00:01:31.280 Fetching value of define "__RDRND__" : 1
00:01:31.280 Fetching value of define "__RDSEED__" : (undefined)
00:01:31.280 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:31.280 Fetching value of define "__znver1__" : (undefined)
00:01:31.280 Fetching value of define "__znver2__" : (undefined)
00:01:31.280 Fetching value of define "__znver3__" : (undefined)
00:01:31.280 Fetching value of define "__znver4__" : (undefined)
00:01:31.280 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:31.280 Message: lib/log: Defining dependency "log"
00:01:31.280 Message: lib/kvargs: Defining dependency "kvargs"
00:01:31.280 Message: lib/telemetry: Defining dependency "telemetry"
00:01:31.280 Checking for function "getentropy" : NO
00:01:31.280 Message: lib/eal: Defining dependency "eal"
00:01:31.280 Message: lib/ring: Defining dependency "ring"
00:01:31.280 Message: lib/rcu: Defining dependency "rcu"
00:01:31.280 Message: lib/mempool: Defining dependency "mempool"
00:01:31.280 Message: lib/mbuf: Defining dependency "mbuf"
00:01:31.280 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:31.280 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:31.280 Compiler for C supports arguments -mpclmul: YES
00:01:31.280 Compiler for C supports arguments -maes: YES
00:01:31.280 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:31.280 Compiler for C supports arguments -mavx512bw: YES
00:01:31.280 Compiler for C supports arguments -mavx512dq: YES
00:01:31.280 Compiler for C supports arguments -mavx512vl: YES
00:01:31.280 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:31.280 Compiler for C supports arguments -mavx2: YES
00:01:31.280 Compiler for C supports arguments -mavx: YES
00:01:31.280 Message: lib/net: Defining dependency "net"
00:01:31.280 Message: lib/meter: Defining dependency "meter"
00:01:31.280 Message: lib/ethdev: Defining dependency "ethdev"
00:01:31.280 Message: lib/pci: Defining dependency "pci"
00:01:31.280 Message: lib/cmdline: Defining dependency "cmdline"
00:01:31.280 Message: lib/hash: Defining dependency "hash"
00:01:31.280 Message: lib/timer: Defining dependency "timer"
00:01:31.280 Message: lib/compressdev: Defining dependency "compressdev"
00:01:31.280 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:31.280 Message: lib/dmadev: Defining dependency "dmadev"
00:01:31.280 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:31.281 Message: lib/power: Defining dependency "power"
00:01:31.281 Message: lib/reorder: Defining dependency "reorder"
00:01:31.281 Message: lib/security: Defining dependency "security"
00:01:31.281 Has header "linux/userfaultfd.h" : YES
00:01:31.281 Has header "linux/vduse.h" : YES
00:01:31.281 Message: lib/vhost: Defining dependency "vhost"
00:01:31.281 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:31.281 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:31.281 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:31.281 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:31.281 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:31.281 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:31.281 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:31.281 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:31.281 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:31.281 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:31.281 Program doxygen found: YES (/usr/bin/doxygen)
00:01:31.281 Configuring doxy-api-html.conf using configuration
00:01:31.281 Configuring doxy-api-man.conf using configuration
00:01:31.281 Program mandb found: YES (/usr/bin/mandb)
00:01:31.281 Program sphinx-build found: NO
00:01:31.281 Configuring rte_build_config.h using configuration
00:01:31.281 Message:
00:01:31.281 =================
00:01:31.281 Applications Enabled
00:01:31.281 =================
00:01:31.281
00:01:31.281 apps:
00:01:31.281
00:01:31.281
00:01:31.281 Message:
00:01:31.281 =================
00:01:31.281 Libraries Enabled
00:01:31.281 =================
00:01:31.281
00:01:31.281 libs:
00:01:31.281 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:31.281 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:31.281 cryptodev, dmadev, power, reorder, security, vhost,
00:01:31.281
00:01:31.281 Message:
00:01:31.281 ===============
00:01:31.281 Drivers Enabled
00:01:31.281 ===============
00:01:31.281
00:01:31.281 common:
00:01:31.281
00:01:31.281 bus:
00:01:31.281 pci, vdev,
00:01:31.281 mempool:
00:01:31.281 ring,
00:01:31.281 dma:
00:01:31.281
00:01:31.281 net:
00:01:31.281
00:01:31.281 crypto:
00:01:31.281
00:01:31.281 compress:
00:01:31.281
00:01:31.281 vdpa:
00:01:31.281
00:01:31.281
00:01:31.281 Message:
00:01:31.281 =================
00:01:31.281 Content Skipped
00:01:31.281 =================
00:01:31.281
00:01:31.281 apps:
00:01:31.281 dumpcap: explicitly disabled via build config
00:01:31.281 graph: explicitly disabled via build config
00:01:31.281 pdump: explicitly disabled via build config
00:01:31.281 proc-info: explicitly disabled via build config
00:01:31.281 test-acl: explicitly disabled via build config
00:01:31.281 test-bbdev: explicitly disabled via build config
00:01:31.281 test-cmdline: explicitly disabled via build config
00:01:31.281 test-compress-perf: explicitly disabled via build config
00:01:31.281 test-crypto-perf: explicitly disabled via build config
00:01:31.281 test-dma-perf: explicitly disabled via build config
00:01:31.281 test-eventdev: explicitly disabled via build config
00:01:31.281 test-fib: explicitly disabled via build config
00:01:31.281 test-flow-perf: explicitly disabled via build config
00:01:31.281 test-gpudev: explicitly disabled via build config
00:01:31.281 test-mldev: explicitly disabled via build config
00:01:31.281 test-pipeline: explicitly disabled via build config
00:01:31.281 test-pmd: explicitly disabled via build config
00:01:31.281 test-regex: explicitly disabled via build config
00:01:31.281 test-sad: explicitly disabled via build config
00:01:31.281 test-security-perf: explicitly disabled via build config
00:01:31.281
00:01:31.281 libs:
00:01:31.281 argparse: explicitly disabled via build config
00:01:31.281 metrics: explicitly disabled via build config
00:01:31.281 acl: explicitly disabled via build config
00:01:31.281 bbdev: explicitly disabled via build config
00:01:31.281 bitratestats: explicitly disabled via build config
00:01:31.281 bpf: explicitly disabled via build config
00:01:31.281 cfgfile: explicitly disabled via build config
00:01:31.281 distributor: explicitly disabled via build config
00:01:31.281 efd: explicitly disabled via build config
00:01:31.281 eventdev: explicitly disabled via build config
00:01:31.281 dispatcher: explicitly disabled via build config
00:01:31.281 gpudev: explicitly disabled via build config
00:01:31.281 gro: explicitly disabled via build config
00:01:31.281 gso: explicitly disabled via build config
00:01:31.281 ip_frag: explicitly disabled via build config
00:01:31.281 jobstats: explicitly disabled via build config
00:01:31.281 latencystats: explicitly disabled via build config
00:01:31.281 lpm: explicitly disabled via build config
00:01:31.281 member: explicitly disabled via build config
00:01:31.281 pcapng: explicitly disabled via build config
00:01:31.281 rawdev: explicitly disabled via build config
00:01:31.281 regexdev: explicitly disabled via build config
00:01:31.281 mldev: explicitly disabled via build config
00:01:31.281 rib: explicitly disabled via build config
00:01:31.281 sched: explicitly disabled via build config
00:01:31.281 stack: explicitly disabled via build config
00:01:31.281 ipsec: explicitly disabled via build config
00:01:31.281 pdcp: explicitly disabled via build config
00:01:31.281 fib: explicitly disabled via build config
00:01:31.281 port: explicitly disabled via build config
00:01:31.281 pdump: explicitly disabled via build config
00:01:31.281 table: explicitly disabled via build config
00:01:31.281 pipeline: explicitly disabled via build config
00:01:31.281 graph: explicitly disabled via build config
00:01:31.281 node: explicitly disabled via build config
00:01:31.281
00:01:31.281 drivers:
00:01:31.281 common/cpt: not in enabled drivers build config
00:01:31.281 common/dpaax: not in enabled drivers build config
00:01:31.281 common/iavf: not in enabled drivers build config
00:01:31.281 common/idpf: not in enabled drivers build config
00:01:31.281 common/ionic: not in enabled drivers build config
00:01:31.281 common/mvep: not in enabled drivers build config
00:01:31.281 common/octeontx: not in enabled drivers build config
00:01:31.281 bus/auxiliary: not in enabled drivers build config
00:01:31.281 bus/cdx: not in enabled drivers build config
00:01:31.281 bus/dpaa: not in enabled drivers build config
00:01:31.281 bus/fslmc: not in enabled drivers build config
00:01:31.281 bus/ifpga: not in enabled drivers build config
00:01:31.281 bus/platform: not in enabled drivers build config
00:01:31.281 bus/uacce: not in enabled drivers build config
00:01:31.281 bus/vmbus: not in enabled drivers build config
00:01:31.281 common/cnxk: not in enabled drivers build config
00:01:31.281 common/mlx5: not in enabled drivers build config
00:01:31.281 common/nfp: not in enabled drivers build config
00:01:31.281 common/nitrox: not in enabled drivers build config
00:01:31.281 common/qat: not in enabled drivers build config
00:01:31.282 common/sfc_efx: not in enabled drivers build config
00:01:31.282 mempool/bucket: not in enabled drivers build config
00:01:31.282 mempool/cnxk: not in enabled drivers build config
00:01:31.282 mempool/dpaa: not in enabled drivers build config
00:01:31.282 mempool/dpaa2: not in enabled drivers build config
00:01:31.282 mempool/octeontx: not in enabled drivers build config
00:01:31.282 mempool/stack: not in enabled drivers build config
00:01:31.282 dma/cnxk: not in enabled drivers build config
00:01:31.282 dma/dpaa: not in enabled drivers build config
00:01:31.282 dma/dpaa2: not in enabled drivers build config
00:01:31.282 dma/hisilicon: not in enabled drivers build config
00:01:31.282 dma/idxd: not in enabled drivers build config
00:01:31.282 dma/ioat: not in enabled drivers build config
00:01:31.282 dma/skeleton: not in enabled drivers build config
00:01:31.282 net/af_packet: not in enabled drivers build config
00:01:31.282 net/af_xdp: not in enabled drivers build config
00:01:31.282 net/ark: not in enabled drivers build config
00:01:31.282 net/atlantic: not in enabled drivers build config
00:01:31.282 net/avp: not in enabled drivers build config
00:01:31.282 net/axgbe: not in enabled drivers build config
00:01:31.282 net/bnx2x: not in enabled drivers build config
00:01:31.282 net/bnxt: not in enabled drivers build config
00:01:31.282 net/bonding: not in enabled drivers build config
00:01:31.282 net/cnxk: not in enabled drivers build config
00:01:31.282 net/cpfl: not in enabled drivers build config
00:01:31.282 net/cxgbe: not in enabled drivers build config
00:01:31.282 net/dpaa: not in enabled drivers build config
00:01:31.282 net/dpaa2: not in enabled drivers build config
00:01:31.282 net/e1000: not in enabled drivers build config
00:01:31.282 net/ena: not in enabled drivers build config
00:01:31.282 net/enetc: not in enabled drivers build config
00:01:31.282 net/enetfec: not in enabled drivers build config
00:01:31.282 net/enic: not in enabled drivers build config
00:01:31.282 net/failsafe: not in enabled drivers build config
00:01:31.282 net/fm10k: not in enabled drivers build config
00:01:31.282 net/gve: not in enabled drivers build config
00:01:31.282 net/hinic: not in enabled drivers build config
00:01:31.282 net/hns3: not in enabled drivers build config
00:01:31.282 net/i40e: not in enabled drivers build config
00:01:31.282 net/iavf: not in enabled drivers build config
00:01:31.282 net/ice: not in enabled drivers build config
00:01:31.282 net/idpf: not in enabled drivers build config
00:01:31.282 net/igc: not in enabled drivers build config
00:01:31.282 net/ionic: not in enabled drivers build config
00:01:31.282 net/ipn3ke: not in enabled drivers build config
00:01:31.282 net/ixgbe: not in enabled drivers build config
00:01:31.282 net/mana: not in enabled drivers build config
00:01:31.282 net/memif: not in enabled drivers build config
00:01:31.282 net/mlx4: not in enabled drivers build config
00:01:31.282 net/mlx5: not in enabled drivers build config
00:01:31.282 net/mvneta: not in enabled drivers build config
00:01:31.282 net/mvpp2: not in enabled drivers build config
00:01:31.282 net/netvsc: not in enabled drivers build config
00:01:31.282 net/nfb: not in enabled drivers build config
00:01:31.282 net/nfp: not in enabled drivers build config
00:01:31.282 net/ngbe: not in enabled drivers build config
00:01:31.282 net/null: not in enabled drivers build config
00:01:31.282 net/octeontx: not in enabled drivers build config
00:01:31.282 net/octeon_ep: not in enabled drivers build config
00:01:31.282 net/pcap: not in enabled drivers build config
00:01:31.282 net/pfe: not in enabled drivers build config
00:01:31.282 net/qede: not in enabled drivers build config
00:01:31.282 net/ring: not in enabled drivers build config
00:01:31.282 net/sfc: not in enabled drivers build config
00:01:31.282 net/softnic: not in enabled drivers build config
00:01:31.282 net/tap: not in enabled drivers build config
00:01:31.282 net/thunderx: not in enabled drivers build config
00:01:31.282 net/txgbe: not in enabled drivers build config
00:01:31.282 net/vdev_netvsc: not in enabled drivers build config
00:01:31.282 net/vhost: not in enabled drivers build config
00:01:31.282 net/virtio: not in enabled drivers build config
00:01:31.282 net/vmxnet3: not in enabled drivers build config
00:01:31.282 raw/*: missing internal dependency, "rawdev"
00:01:31.282 crypto/armv8: not in enabled drivers build config
00:01:31.282 crypto/bcmfs: not in enabled drivers build config
00:01:31.282 crypto/caam_jr: not in enabled drivers build config
00:01:31.282 crypto/ccp: not in enabled drivers build config
00:01:31.282 crypto/cnxk: not in enabled drivers build config
00:01:31.282 crypto/dpaa_sec: not in enabled drivers build config
00:01:31.282 crypto/dpaa2_sec: not in enabled drivers build config
00:01:31.282 crypto/ipsec_mb: not in enabled drivers build config
00:01:31.282 crypto/mlx5: not in enabled drivers build config
00:01:31.282 crypto/mvsam: not in enabled drivers build config
00:01:31.282 crypto/nitrox: not in enabled drivers build config
00:01:31.282 crypto/null: not in enabled drivers build config
00:01:31.282 crypto/octeontx: not in enabled drivers build config
00:01:31.282 crypto/openssl: not in enabled drivers build config
00:01:31.282 crypto/scheduler: not in enabled drivers build config
00:01:31.282 crypto/uadk: not in enabled drivers build config
00:01:31.282 crypto/virtio: not in enabled drivers build config
00:01:31.282 compress/isal: not in enabled drivers build config
00:01:31.282 compress/mlx5: not in enabled drivers build config
00:01:31.282 compress/nitrox: not in enabled drivers build config
00:01:31.282 compress/octeontx: not in enabled drivers build config
00:01:31.282 compress/zlib: not in enabled drivers build config
00:01:31.282 regex/*: missing internal dependency, "regexdev"
00:01:31.282 ml/*: missing internal dependency, "mldev"
00:01:31.282 vdpa/ifc: not in enabled drivers build config
00:01:31.282 vdpa/mlx5: not in enabled drivers build config
00:01:31.282 vdpa/nfp: not in enabled drivers build config
00:01:31.282 vdpa/sfc: not in enabled drivers build config
00:01:31.282 event/*: missing internal dependency, "eventdev"
00:01:31.282 baseband/*: missing internal dependency, "bbdev"
00:01:31.282 gpu/*: missing internal dependency, "gpudev"
00:01:31.282
00:01:31.282
00:01:31.540 Build targets in project: 85
00:01:31.540
00:01:31.540 DPDK 24.03.0
00:01:31.540
00:01:31.540 User defined options
00:01:31.540 buildtype : debug
00:01:31.540 default_library : shared
00:01:31.540 libdir : lib
00:01:31.540 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:31.540 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:31.540 c_link_args :
00:01:31.540 cpu_instruction_set: native
00:01:31.541 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump
00:01:31.541 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump
00:01:31.541 enable_docs : false
00:01:31.541 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:31.541 enable_kmods : false
00:01:31.541 max_lcores : 128
00:01:31.541 tests : false
00:01:31.541
00:01:31.541 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
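The "User defined options" summary above maps one-to-one to meson -D options on the DPDK build. A hedged sketch of an equivalent hand-rolled configure step, with values copied from that summary (SPDK normally drives this through its own configure wrapper, so treat this purely as illustration; the disable lists are abbreviated in the comment):

# Reconfigure DPDK by hand with the same knobs the summary shows.
meson setup build-tmp \
  --buildtype=debug \
  --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build \
  -Ddefault_library=shared \
  -Dmax_lcores=128 \
  -Dtests=false \
  -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
  -Ddisable_apps=dumpcap,graph,pdump,proc-info \
  -Dc_args='-Wno-stringop-overflow -fcommon -fPIC -Werror'
# disable_apps/disable_libs take the full comma-separated lists printed
# in the summary above; only a few entries are repeated here.
ninja -C build-tmp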
00:01:32.117 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:32.117 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:32.117 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:32.117 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:32.117 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:32.117 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:32.117 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:32.117 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:32.117 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:32.117 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:32.117 [10/268] Linking static target lib/librte_kvargs.a
00:01:32.117 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:32.117 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:32.117 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:32.117 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:32.117 [15/268] Linking static target lib/librte_log.a
00:01:32.376 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:32.954 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:32.954 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:32.954 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:32.954 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:32.954 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:32.954 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:32.954 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:32.954 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:32.954 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:32.954 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:32.954 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:32.954 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:32.954 [29/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:32.954 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:32.954 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:32.954 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:32.954 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:32.954 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:32.954 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:32.954 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:32.954 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:32.954 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:32.954 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:32.954 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:32.954 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:32.954 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:32.954 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:32.954 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:32.954 [45/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:32.954 [46/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:32.954 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:33.218 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:33.218 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:33.218 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:33.218 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:33.218 [52/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:33.218 [53/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:33.218 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:33.218 [55/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:33.218 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:33.218 [57/268] Linking static target lib/librte_telemetry.a
00:01:33.218 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:33.218 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:33.218 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:33.218 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:33.218 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:33.482 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:33.482 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:33.482 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:33.482 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:33.482 [67/268] Linking target lib/librte_log.so.24.1
00:01:33.740 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:33.740 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:33.740 [70/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:33.740 [71/268] Linking static target lib/librte_pci.a
00:01:33.740 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:33.740 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:33.740 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:33.740 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:33.740 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:34.000 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:34.000 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:34.000 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:34.000 [80/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:34.000 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:34.000 [82/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:34.000 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:34.000 [84/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:34.000 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:34.000 [86/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:34.000 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:34.000 [88/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:34.000 [89/268] Linking target lib/librte_kvargs.so.24.1
00:01:34.000 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:34.000 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:34.000 [92/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:34.000 [93/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:34.000 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:34.000 [95/268] Linking static target lib/librte_meter.a
00:01:34.000 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:34.000 [97/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:34.000 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:34.000 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:34.000 [100/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:34.000 [101/268] Linking static target lib/librte_ring.a
00:01:34.000 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:34.000 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:34.000 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:34.000 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:34.270 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:34.270 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:34.270 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:34.270 [109/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:34.270 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:34.270 [111/268] Linking target lib/librte_telemetry.so.24.1
00:01:34.270 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:34.270 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:34.270 [114/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:34.270 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:34.270 [116/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.270 [117/268] Linking static target lib/librte_eal.a 00:01:34.270 [118/268] Linking static target lib/librte_mempool.a 00:01:34.270 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:34.270 [120/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:34.270 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:34.270 [122/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:34.270 [123/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:34.270 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:34.270 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:34.270 [126/268] Linking static target lib/librte_rcu.a 00:01:34.533 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:34.533 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:34.533 [129/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:34.534 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:34.534 [131/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:34.534 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:34.534 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:34.534 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:34.534 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.534 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:34.534 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:34.534 [138/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:34.795 [139/268] Linking static target lib/librte_net.a 00:01:34.795 [140/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:34.795 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:34.795 [142/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.795 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:34.795 [144/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:35.054 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:35.054 [146/268] Linking static target lib/librte_cmdline.a 00:01:35.054 [147/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:35.054 [148/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:35.054 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:35.054 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.054 [151/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:35.054 
[152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:35.054 [153/268] Linking static target lib/librte_timer.a 00:01:35.054 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:35.054 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:35.054 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:35.054 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.312 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:35.313 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:35.313 [160/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:35.313 [161/268] Linking static target lib/librte_dmadev.a 00:01:35.313 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:35.313 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:35.313 [164/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:35.313 [165/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.313 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:35.313 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:35.570 [168/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:35.570 [169/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:35.570 [170/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:35.570 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:35.570 [172/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.570 [173/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:35.570 [174/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.570 [175/268] Linking static target lib/librte_hash.a 00:01:35.570 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:35.570 [177/268] Linking static target lib/librte_power.a 00:01:35.570 [178/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:35.570 [179/268] Linking static target lib/librte_compressdev.a 00:01:35.571 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.571 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:35.571 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:35.571 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.571 [184/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:35.571 [185/268] Linking static target lib/librte_reorder.a 00:01:35.571 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.828 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.828 [188/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.828 [189/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.828 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:35.828 [191/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 
00:01:35.828 [192/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.828 [193/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:35.828 [194/268] Linking static target lib/librte_mbuf.a 00:01:35.828 [195/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.828 [196/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:35.828 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:35.828 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:35.828 [199/268] Linking static target lib/librte_security.a 00:01:35.828 [200/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.828 [201/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.828 [202/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.828 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:35.828 [204/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:36.086 [205/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.086 [206/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.086 [207/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:36.086 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.086 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:36.086 [210/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.086 [211/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.086 [212/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.086 [213/268] Linking static target drivers/librte_bus_pci.a 00:01:36.086 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:36.086 [215/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.086 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:36.086 [217/268] Linking static target drivers/librte_mempool_ring.a 00:01:36.086 [218/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.345 [219/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.345 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:36.345 [221/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.345 [222/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:36.345 [223/268] Linking static target lib/librte_ethdev.a 00:01:36.345 [224/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:36.345 [225/268] Linking static target lib/librte_cryptodev.a 00:01:36.603 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.575 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.508 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:41.035 [229/268] Generating 
lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.035 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.035 [231/268] Linking target lib/librte_eal.so.24.1 00:01:41.035 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:41.035 [233/268] Linking target lib/librte_ring.so.24.1 00:01:41.035 [234/268] Linking target lib/librte_timer.so.24.1 00:01:41.035 [235/268] Linking target lib/librte_meter.so.24.1 00:01:41.035 [236/268] Linking target lib/librte_pci.so.24.1 00:01:41.035 [237/268] Linking target lib/librte_dmadev.so.24.1 00:01:41.035 [238/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:41.035 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:41.035 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:41.035 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:41.035 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:41.035 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:41.035 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:41.035 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:41.035 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:41.035 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:41.035 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:41.035 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:41.035 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:41.293 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:41.293 [252/268] Linking target lib/librte_net.so.24.1 00:01:41.293 [253/268] Linking target lib/librte_reorder.so.24.1 00:01:41.293 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:41.293 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:41.293 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:41.293 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:41.551 [258/268] Linking target lib/librte_cmdline.so.24.1 00:01:41.551 [259/268] Linking target lib/librte_security.so.24.1 00:01:41.551 [260/268] Linking target lib/librte_hash.so.24.1 00:01:41.551 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:41.551 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:41.551 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:41.551 [264/268] Linking target lib/librte_power.so.24.1 00:01:44.840 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:44.840 [266/268] Linking static target lib/librte_vhost.a 00:01:45.406 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.406 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:45.406 INFO: autodetecting backend as ninja 00:01:45.406 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:46.341 CC lib/ut_mock/mock.o 00:01:46.341 CC lib/ut/ut.o 00:01:46.341 CC lib/log/log.o 00:01:46.341 CC lib/log/log_flags.o 00:01:46.341 CC 
lib/log/log_deprecated.o 00:01:46.598 LIB libspdk_ut_mock.a 00:01:46.598 LIB libspdk_log.a 00:01:46.598 LIB libspdk_ut.a 00:01:46.598 SO libspdk_ut_mock.so.6.0 00:01:46.598 SO libspdk_log.so.7.0 00:01:46.598 SO libspdk_ut.so.2.0 00:01:46.598 SYMLINK libspdk_ut_mock.so 00:01:46.598 SYMLINK libspdk_ut.so 00:01:46.598 SYMLINK libspdk_log.so 00:01:46.856 CXX lib/trace_parser/trace.o 00:01:46.856 CC lib/dma/dma.o 00:01:46.856 CC lib/ioat/ioat.o 00:01:46.856 CC lib/util/base64.o 00:01:46.856 CC lib/util/bit_array.o 00:01:46.856 CC lib/util/cpuset.o 00:01:46.856 CC lib/util/crc16.o 00:01:46.856 CC lib/util/crc32.o 00:01:46.856 CC lib/util/crc32c.o 00:01:46.856 CC lib/util/crc32_ieee.o 00:01:46.856 CC lib/util/crc64.o 00:01:46.856 CC lib/util/dif.o 00:01:46.856 CC lib/util/fd.o 00:01:46.856 CC lib/util/fd_group.o 00:01:46.856 CC lib/util/file.o 00:01:46.856 CC lib/util/hexlify.o 00:01:46.856 CC lib/util/iov.o 00:01:46.856 CC lib/util/math.o 00:01:46.856 CC lib/util/net.o 00:01:46.856 CC lib/util/pipe.o 00:01:46.856 CC lib/util/strerror_tls.o 00:01:46.856 CC lib/util/uuid.o 00:01:46.856 CC lib/util/string.o 00:01:46.856 CC lib/util/xor.o 00:01:46.856 CC lib/util/zipf.o 00:01:46.856 CC lib/vfio_user/host/vfio_user_pci.o 00:01:46.856 CC lib/vfio_user/host/vfio_user.o 00:01:47.114 LIB libspdk_dma.a 00:01:47.114 LIB libspdk_ioat.a 00:01:47.114 SO libspdk_dma.so.4.0 00:01:47.114 SO libspdk_ioat.so.7.0 00:01:47.114 SYMLINK libspdk_dma.so 00:01:47.114 SYMLINK libspdk_ioat.so 00:01:47.114 LIB libspdk_vfio_user.a 00:01:47.114 SO libspdk_vfio_user.so.5.0 00:01:47.373 SYMLINK libspdk_vfio_user.so 00:01:47.373 LIB libspdk_util.a 00:01:47.373 SO libspdk_util.so.10.0 00:01:47.631 SYMLINK libspdk_util.so 00:01:47.631 CC lib/vmd/vmd.o 00:01:47.631 CC lib/conf/conf.o 00:01:47.631 CC lib/json/json_parse.o 00:01:47.631 CC lib/json/json_util.o 00:01:47.631 CC lib/idxd/idxd.o 00:01:47.631 CC lib/vmd/led.o 00:01:47.631 CC lib/env_dpdk/env.o 00:01:47.631 CC lib/json/json_write.o 00:01:47.631 CC lib/idxd/idxd_user.o 00:01:47.631 CC lib/env_dpdk/memory.o 00:01:47.631 CC lib/idxd/idxd_kernel.o 00:01:47.631 CC lib/env_dpdk/pci.o 00:01:47.631 CC lib/rdma_provider/common.o 00:01:47.631 CC lib/env_dpdk/init.o 00:01:47.631 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:47.631 CC lib/env_dpdk/threads.o 00:01:47.631 CC lib/rdma_utils/rdma_utils.o 00:01:47.631 CC lib/env_dpdk/pci_ioat.o 00:01:47.631 CC lib/env_dpdk/pci_virtio.o 00:01:47.631 CC lib/env_dpdk/pci_vmd.o 00:01:47.631 CC lib/env_dpdk/pci_idxd.o 00:01:47.631 CC lib/env_dpdk/pci_event.o 00:01:47.631 CC lib/env_dpdk/sigbus_handler.o 00:01:47.631 CC lib/env_dpdk/pci_dpdk.o 00:01:47.631 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:47.631 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:47.889 LIB libspdk_trace_parser.a 00:01:47.889 SO libspdk_trace_parser.so.5.0 00:01:47.889 SYMLINK libspdk_trace_parser.so 00:01:47.889 LIB libspdk_conf.a 00:01:47.889 SO libspdk_conf.so.6.0 00:01:47.889 LIB libspdk_rdma_provider.a 00:01:48.146 LIB libspdk_rdma_utils.a 00:01:48.146 LIB libspdk_json.a 00:01:48.146 SO libspdk_rdma_provider.so.6.0 00:01:48.146 SO libspdk_rdma_utils.so.1.0 00:01:48.146 SYMLINK libspdk_conf.so 00:01:48.146 SO libspdk_json.so.6.0 00:01:48.146 SYMLINK libspdk_rdma_provider.so 00:01:48.146 SYMLINK libspdk_rdma_utils.so 00:01:48.146 SYMLINK libspdk_json.so 00:01:48.405 CC lib/jsonrpc/jsonrpc_server.o 00:01:48.405 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:48.405 CC lib/jsonrpc/jsonrpc_client.o 00:01:48.405 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:48.405 LIB libspdk_idxd.a 00:01:48.405 
SO libspdk_idxd.so.12.0 00:01:48.405 SYMLINK libspdk_idxd.so 00:01:48.405 LIB libspdk_vmd.a 00:01:48.405 SO libspdk_vmd.so.6.0 00:01:48.405 SYMLINK libspdk_vmd.so 00:01:48.662 LIB libspdk_jsonrpc.a 00:01:48.662 SO libspdk_jsonrpc.so.6.0 00:01:48.662 SYMLINK libspdk_jsonrpc.so 00:01:48.921 CC lib/rpc/rpc.o 00:01:48.921 LIB libspdk_rpc.a 00:01:48.921 SO libspdk_rpc.so.6.0 00:01:49.179 SYMLINK libspdk_rpc.so 00:01:49.179 CC lib/trace/trace.o 00:01:49.179 CC lib/trace/trace_flags.o 00:01:49.179 CC lib/trace/trace_rpc.o 00:01:49.179 CC lib/keyring/keyring.o 00:01:49.179 CC lib/keyring/keyring_rpc.o 00:01:49.179 CC lib/notify/notify.o 00:01:49.179 CC lib/notify/notify_rpc.o 00:01:49.437 LIB libspdk_notify.a 00:01:49.437 SO libspdk_notify.so.6.0 00:01:49.437 LIB libspdk_keyring.a 00:01:49.437 SYMLINK libspdk_notify.so 00:01:49.437 LIB libspdk_trace.a 00:01:49.437 SO libspdk_keyring.so.1.0 00:01:49.437 SO libspdk_trace.so.10.0 00:01:49.695 SYMLINK libspdk_keyring.so 00:01:49.695 SYMLINK libspdk_trace.so 00:01:49.695 CC lib/thread/thread.o 00:01:49.695 CC lib/thread/iobuf.o 00:01:49.695 CC lib/sock/sock.o 00:01:49.695 CC lib/sock/sock_rpc.o 00:01:49.954 LIB libspdk_env_dpdk.a 00:01:49.954 SO libspdk_env_dpdk.so.15.0 00:01:49.954 SYMLINK libspdk_env_dpdk.so 00:01:50.212 LIB libspdk_sock.a 00:01:50.212 SO libspdk_sock.so.10.0 00:01:50.212 SYMLINK libspdk_sock.so 00:01:50.470 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:50.470 CC lib/nvme/nvme_ctrlr.o 00:01:50.470 CC lib/nvme/nvme_fabric.o 00:01:50.470 CC lib/nvme/nvme_ns_cmd.o 00:01:50.470 CC lib/nvme/nvme_ns.o 00:01:50.470 CC lib/nvme/nvme_pcie_common.o 00:01:50.470 CC lib/nvme/nvme_pcie.o 00:01:50.470 CC lib/nvme/nvme_qpair.o 00:01:50.470 CC lib/nvme/nvme.o 00:01:50.470 CC lib/nvme/nvme_quirks.o 00:01:50.470 CC lib/nvme/nvme_transport.o 00:01:50.470 CC lib/nvme/nvme_discovery.o 00:01:50.470 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:50.470 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:50.470 CC lib/nvme/nvme_tcp.o 00:01:50.470 CC lib/nvme/nvme_opal.o 00:01:50.470 CC lib/nvme/nvme_io_msg.o 00:01:50.470 CC lib/nvme/nvme_poll_group.o 00:01:50.470 CC lib/nvme/nvme_zns.o 00:01:50.470 CC lib/nvme/nvme_stubs.o 00:01:50.470 CC lib/nvme/nvme_auth.o 00:01:50.470 CC lib/nvme/nvme_cuse.o 00:01:50.470 CC lib/nvme/nvme_vfio_user.o 00:01:50.470 CC lib/nvme/nvme_rdma.o 00:01:51.413 LIB libspdk_thread.a 00:01:51.413 SO libspdk_thread.so.10.1 00:01:51.413 SYMLINK libspdk_thread.so 00:01:51.670 CC lib/vfu_tgt/tgt_endpoint.o 00:01:51.670 CC lib/accel/accel.o 00:01:51.670 CC lib/blob/blobstore.o 00:01:51.670 CC lib/init/json_config.o 00:01:51.670 CC lib/vfu_tgt/tgt_rpc.o 00:01:51.670 CC lib/accel/accel_rpc.o 00:01:51.670 CC lib/virtio/virtio.o 00:01:51.670 CC lib/init/subsystem.o 00:01:51.670 CC lib/blob/request.o 00:01:51.670 CC lib/init/subsystem_rpc.o 00:01:51.670 CC lib/virtio/virtio_vhost_user.o 00:01:51.670 CC lib/accel/accel_sw.o 00:01:51.670 CC lib/blob/zeroes.o 00:01:51.670 CC lib/virtio/virtio_vfio_user.o 00:01:51.670 CC lib/init/rpc.o 00:01:51.670 CC lib/virtio/virtio_pci.o 00:01:51.670 CC lib/blob/blob_bs_dev.o 00:01:51.928 LIB libspdk_init.a 00:01:51.928 SO libspdk_init.so.5.0 00:01:51.928 LIB libspdk_virtio.a 00:01:51.928 LIB libspdk_vfu_tgt.a 00:01:51.928 SYMLINK libspdk_init.so 00:01:51.928 SO libspdk_vfu_tgt.so.3.0 00:01:51.928 SO libspdk_virtio.so.7.0 00:01:51.928 SYMLINK libspdk_vfu_tgt.so 00:01:51.928 SYMLINK libspdk_virtio.so 00:01:52.186 CC lib/event/app.o 00:01:52.186 CC lib/event/reactor.o 00:01:52.186 CC lib/event/log_rpc.o 00:01:52.186 CC lib/event/app_rpc.o 
00:01:52.186 CC lib/event/scheduler_static.o 00:01:52.444 LIB libspdk_event.a 00:01:52.701 SO libspdk_event.so.14.0 00:01:52.701 LIB libspdk_accel.a 00:01:52.701 SO libspdk_accel.so.16.0 00:01:52.701 SYMLINK libspdk_event.so 00:01:52.701 SYMLINK libspdk_accel.so 00:01:52.701 LIB libspdk_nvme.a 00:01:52.960 CC lib/bdev/bdev.o 00:01:52.960 CC lib/bdev/bdev_rpc.o 00:01:52.960 CC lib/bdev/bdev_zone.o 00:01:52.960 CC lib/bdev/part.o 00:01:52.960 CC lib/bdev/scsi_nvme.o 00:01:52.960 SO libspdk_nvme.so.13.1 00:01:53.219 SYMLINK libspdk_nvme.so 00:01:54.590 LIB libspdk_blob.a 00:01:54.590 SO libspdk_blob.so.11.0 00:01:54.590 SYMLINK libspdk_blob.so 00:01:54.848 CC lib/blobfs/blobfs.o 00:01:54.848 CC lib/blobfs/tree.o 00:01:54.848 CC lib/lvol/lvol.o 00:01:55.414 LIB libspdk_bdev.a 00:01:55.414 SO libspdk_bdev.so.16.0 00:01:55.414 SYMLINK libspdk_bdev.so 00:01:55.713 LIB libspdk_blobfs.a 00:01:55.713 SO libspdk_blobfs.so.10.0 00:01:55.713 CC lib/scsi/dev.o 00:01:55.713 CC lib/scsi/lun.o 00:01:55.713 CC lib/nbd/nbd.o 00:01:55.713 CC lib/ftl/ftl_core.o 00:01:55.713 CC lib/nvmf/ctrlr.o 00:01:55.713 CC lib/scsi/port.o 00:01:55.713 CC lib/nbd/nbd_rpc.o 00:01:55.713 CC lib/ublk/ublk.o 00:01:55.713 CC lib/ftl/ftl_init.o 00:01:55.713 CC lib/scsi/scsi.o 00:01:55.713 CC lib/nvmf/ctrlr_discovery.o 00:01:55.713 CC lib/ftl/ftl_layout.o 00:01:55.713 CC lib/scsi/scsi_bdev.o 00:01:55.713 CC lib/nvmf/ctrlr_bdev.o 00:01:55.713 CC lib/ublk/ublk_rpc.o 00:01:55.713 CC lib/ftl/ftl_debug.o 00:01:55.713 CC lib/scsi/scsi_pr.o 00:01:55.713 CC lib/nvmf/subsystem.o 00:01:55.713 CC lib/ftl/ftl_io.o 00:01:55.713 CC lib/ftl/ftl_sb.o 00:01:55.713 CC lib/nvmf/nvmf.o 00:01:55.713 CC lib/scsi/scsi_rpc.o 00:01:55.713 CC lib/scsi/task.o 00:01:55.713 CC lib/ftl/ftl_l2p.o 00:01:55.713 CC lib/nvmf/nvmf_rpc.o 00:01:55.713 CC lib/nvmf/transport.o 00:01:55.713 CC lib/ftl/ftl_l2p_flat.o 00:01:55.713 CC lib/ftl/ftl_nv_cache.o 00:01:55.713 CC lib/ftl/ftl_band.o 00:01:55.713 CC lib/nvmf/tcp.o 00:01:55.713 CC lib/nvmf/stubs.o 00:01:55.713 CC lib/nvmf/mdns_server.o 00:01:55.713 CC lib/ftl/ftl_band_ops.o 00:01:55.713 CC lib/ftl/ftl_writer.o 00:01:55.713 CC lib/nvmf/vfio_user.o 00:01:55.713 CC lib/ftl/ftl_rq.o 00:01:55.713 CC lib/nvmf/rdma.o 00:01:55.713 CC lib/ftl/ftl_reloc.o 00:01:55.713 CC lib/nvmf/auth.o 00:01:55.713 CC lib/ftl/ftl_l2p_cache.o 00:01:55.713 CC lib/ftl/ftl_p2l.o 00:01:55.713 CC lib/ftl/mngt/ftl_mngt.o 00:01:55.713 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:55.713 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:55.713 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:55.713 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:55.713 SYMLINK libspdk_blobfs.so 00:01:55.713 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:55.713 LIB libspdk_lvol.a 00:01:56.069 SO libspdk_lvol.so.10.0 00:01:56.069 SYMLINK libspdk_lvol.so 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:56.069 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:56.069 CC lib/ftl/utils/ftl_conf.o 00:01:56.069 CC lib/ftl/utils/ftl_md.o 00:01:56.069 CC lib/ftl/utils/ftl_mempool.o 00:01:56.070 CC lib/ftl/utils/ftl_bitmap.o 00:01:56.070 CC lib/ftl/utils/ftl_property.o 00:01:56.070 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:56.070 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:56.070 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:56.070 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:56.070 CC 
lib/ftl/upgrade/ftl_band_upgrade.o 00:01:56.334 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:56.334 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:01:56.334 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:56.334 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:56.334 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:56.334 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:56.334 CC lib/ftl/base/ftl_base_dev.o 00:01:56.334 CC lib/ftl/base/ftl_base_bdev.o 00:01:56.334 CC lib/ftl/ftl_trace.o 00:01:56.591 LIB libspdk_nbd.a 00:01:56.591 SO libspdk_nbd.so.7.0 00:01:56.591 LIB libspdk_scsi.a 00:01:56.591 SYMLINK libspdk_nbd.so 00:01:56.591 SO libspdk_scsi.so.9.0 00:01:56.591 LIB libspdk_ublk.a 00:01:56.849 SYMLINK libspdk_scsi.so 00:01:56.849 SO libspdk_ublk.so.3.0 00:01:56.849 SYMLINK libspdk_ublk.so 00:01:56.849 CC lib/iscsi/conn.o 00:01:56.849 CC lib/vhost/vhost.o 00:01:56.849 CC lib/vhost/vhost_rpc.o 00:01:56.849 CC lib/iscsi/init_grp.o 00:01:56.849 CC lib/vhost/vhost_scsi.o 00:01:56.849 CC lib/iscsi/iscsi.o 00:01:56.849 CC lib/vhost/vhost_blk.o 00:01:56.849 CC lib/iscsi/md5.o 00:01:56.849 CC lib/iscsi/param.o 00:01:56.849 CC lib/vhost/rte_vhost_user.o 00:01:56.849 CC lib/iscsi/portal_grp.o 00:01:56.849 CC lib/iscsi/tgt_node.o 00:01:56.849 CC lib/iscsi/iscsi_subsystem.o 00:01:56.849 CC lib/iscsi/iscsi_rpc.o 00:01:56.849 CC lib/iscsi/task.o 00:01:57.107 LIB libspdk_ftl.a 00:01:57.365 SO libspdk_ftl.so.9.0 00:01:57.623 SYMLINK libspdk_ftl.so 00:01:58.189 LIB libspdk_vhost.a 00:01:58.189 SO libspdk_vhost.so.8.0 00:01:58.189 SYMLINK libspdk_vhost.so 00:01:58.189 LIB libspdk_nvmf.a 00:01:58.447 SO libspdk_nvmf.so.19.0 00:01:58.447 LIB libspdk_iscsi.a 00:01:58.447 SO libspdk_iscsi.so.8.0 00:01:58.447 SYMLINK libspdk_nvmf.so 00:01:58.447 SYMLINK libspdk_iscsi.so 00:01:58.705 CC module/env_dpdk/env_dpdk_rpc.o 00:01:58.705 CC module/vfu_device/vfu_virtio.o 00:01:58.705 CC module/vfu_device/vfu_virtio_blk.o 00:01:58.705 CC module/vfu_device/vfu_virtio_scsi.o 00:01:58.705 CC module/vfu_device/vfu_virtio_rpc.o 00:01:58.963 CC module/accel/error/accel_error.o 00:01:58.963 CC module/blob/bdev/blob_bdev.o 00:01:58.963 CC module/accel/error/accel_error_rpc.o 00:01:58.963 CC module/keyring/linux/keyring.o 00:01:58.963 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:58.963 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:58.963 CC module/keyring/linux/keyring_rpc.o 00:01:58.963 CC module/accel/dsa/accel_dsa.o 00:01:58.963 CC module/accel/iaa/accel_iaa.o 00:01:58.963 CC module/accel/ioat/accel_ioat.o 00:01:58.963 CC module/accel/iaa/accel_iaa_rpc.o 00:01:58.963 CC module/accel/dsa/accel_dsa_rpc.o 00:01:58.963 CC module/accel/ioat/accel_ioat_rpc.o 00:01:58.963 CC module/sock/posix/posix.o 00:01:58.963 CC module/keyring/file/keyring.o 00:01:58.963 CC module/scheduler/gscheduler/gscheduler.o 00:01:58.963 CC module/keyring/file/keyring_rpc.o 00:01:58.963 LIB libspdk_env_dpdk_rpc.a 00:01:58.963 SO libspdk_env_dpdk_rpc.so.6.0 00:01:58.963 SYMLINK libspdk_env_dpdk_rpc.so 00:01:58.963 LIB libspdk_keyring_linux.a 00:01:58.963 LIB libspdk_keyring_file.a 00:01:58.963 LIB libspdk_scheduler_gscheduler.a 00:01:58.963 LIB libspdk_scheduler_dpdk_governor.a 00:01:59.222 SO libspdk_keyring_file.so.1.0 00:01:59.222 SO libspdk_keyring_linux.so.1.0 00:01:59.222 LIB libspdk_accel_error.a 00:01:59.222 SO libspdk_scheduler_gscheduler.so.4.0 00:01:59.222 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:59.222 LIB libspdk_accel_ioat.a 00:01:59.222 LIB libspdk_scheduler_dynamic.a 00:01:59.222 SO libspdk_accel_error.so.2.0 00:01:59.222 LIB libspdk_accel_iaa.a 00:01:59.222 SO 
libspdk_accel_ioat.so.6.0 00:01:59.222 SO libspdk_scheduler_dynamic.so.4.0 00:01:59.222 SYMLINK libspdk_keyring_file.so 00:01:59.222 SYMLINK libspdk_keyring_linux.so 00:01:59.222 SYMLINK libspdk_scheduler_gscheduler.so 00:01:59.222 SO libspdk_accel_iaa.so.3.0 00:01:59.222 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:59.222 SYMLINK libspdk_accel_error.so 00:01:59.222 LIB libspdk_blob_bdev.a 00:01:59.222 LIB libspdk_accel_dsa.a 00:01:59.222 SYMLINK libspdk_accel_ioat.so 00:01:59.222 SYMLINK libspdk_scheduler_dynamic.so 00:01:59.222 SO libspdk_blob_bdev.so.11.0 00:01:59.222 SYMLINK libspdk_accel_iaa.so 00:01:59.222 SO libspdk_accel_dsa.so.5.0 00:01:59.222 SYMLINK libspdk_blob_bdev.so 00:01:59.222 SYMLINK libspdk_accel_dsa.so 00:01:59.481 LIB libspdk_vfu_device.a 00:01:59.481 SO libspdk_vfu_device.so.3.0 00:01:59.481 CC module/bdev/delay/vbdev_delay.o 00:01:59.481 CC module/bdev/malloc/bdev_malloc.o 00:01:59.481 CC module/blobfs/bdev/blobfs_bdev.o 00:01:59.482 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:59.482 CC module/bdev/error/vbdev_error.o 00:01:59.482 CC module/bdev/lvol/vbdev_lvol.o 00:01:59.482 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:59.482 CC module/bdev/gpt/gpt.o 00:01:59.482 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:59.482 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:59.482 CC module/bdev/error/vbdev_error_rpc.o 00:01:59.482 CC module/bdev/gpt/vbdev_gpt.o 00:01:59.482 CC module/bdev/null/bdev_null.o 00:01:59.482 CC module/bdev/null/bdev_null_rpc.o 00:01:59.482 CC module/bdev/passthru/vbdev_passthru.o 00:01:59.482 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:59.482 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:59.482 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:59.482 CC module/bdev/aio/bdev_aio.o 00:01:59.482 CC module/bdev/aio/bdev_aio_rpc.o 00:01:59.482 CC module/bdev/ftl/bdev_ftl.o 00:01:59.482 CC module/bdev/split/vbdev_split.o 00:01:59.482 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:59.482 CC module/bdev/raid/bdev_raid.o 00:01:59.482 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:59.482 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:59.482 CC module/bdev/split/vbdev_split_rpc.o 00:01:59.482 CC module/bdev/raid/bdev_raid_rpc.o 00:01:59.482 CC module/bdev/nvme/bdev_nvme.o 00:01:59.482 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:59.482 CC module/bdev/raid/bdev_raid_sb.o 00:01:59.482 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:59.482 CC module/bdev/raid/raid0.o 00:01:59.482 CC module/bdev/nvme/nvme_rpc.o 00:01:59.482 CC module/bdev/raid/raid1.o 00:01:59.482 CC module/bdev/nvme/bdev_mdns_client.o 00:01:59.482 CC module/bdev/iscsi/bdev_iscsi.o 00:01:59.482 CC module/bdev/raid/concat.o 00:01:59.482 CC module/bdev/nvme/vbdev_opal.o 00:01:59.482 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:59.482 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:59.482 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:59.740 SYMLINK libspdk_vfu_device.so 00:01:59.740 LIB libspdk_sock_posix.a 00:01:59.740 SO libspdk_sock_posix.so.6.0 00:01:59.998 LIB libspdk_blobfs_bdev.a 00:01:59.998 SO libspdk_blobfs_bdev.so.6.0 00:01:59.998 SYMLINK libspdk_sock_posix.so 00:01:59.998 LIB libspdk_bdev_split.a 00:01:59.998 SYMLINK libspdk_blobfs_bdev.so 00:01:59.998 LIB libspdk_bdev_gpt.a 00:01:59.998 SO libspdk_bdev_split.so.6.0 00:01:59.998 LIB libspdk_bdev_error.a 00:01:59.998 SO libspdk_bdev_gpt.so.6.0 00:01:59.998 LIB libspdk_bdev_null.a 00:01:59.998 LIB libspdk_bdev_iscsi.a 00:01:59.998 SO libspdk_bdev_error.so.6.0 00:01:59.998 SO libspdk_bdev_null.so.6.0 00:01:59.998 LIB libspdk_bdev_passthru.a 
00:01:59.998 LIB libspdk_bdev_ftl.a 00:01:59.998 SYMLINK libspdk_bdev_split.so 00:01:59.998 LIB libspdk_bdev_delay.a 00:01:59.998 SO libspdk_bdev_iscsi.so.6.0 00:01:59.998 SYMLINK libspdk_bdev_gpt.so 00:01:59.998 SO libspdk_bdev_passthru.so.6.0 00:01:59.998 SO libspdk_bdev_ftl.so.6.0 00:01:59.998 SO libspdk_bdev_delay.so.6.0 00:01:59.998 LIB libspdk_bdev_aio.a 00:01:59.998 SYMLINK libspdk_bdev_error.so 00:01:59.998 LIB libspdk_bdev_zone_block.a 00:01:59.998 SYMLINK libspdk_bdev_null.so 00:02:00.256 LIB libspdk_bdev_malloc.a 00:02:00.256 SO libspdk_bdev_aio.so.6.0 00:02:00.256 SYMLINK libspdk_bdev_iscsi.so 00:02:00.256 SO libspdk_bdev_zone_block.so.6.0 00:02:00.256 SYMLINK libspdk_bdev_passthru.so 00:02:00.256 SYMLINK libspdk_bdev_ftl.so 00:02:00.256 SYMLINK libspdk_bdev_delay.so 00:02:00.256 SO libspdk_bdev_malloc.so.6.0 00:02:00.256 SYMLINK libspdk_bdev_aio.so 00:02:00.256 SYMLINK libspdk_bdev_zone_block.so 00:02:00.256 SYMLINK libspdk_bdev_malloc.so 00:02:00.256 LIB libspdk_bdev_lvol.a 00:02:00.256 LIB libspdk_bdev_virtio.a 00:02:00.256 SO libspdk_bdev_lvol.so.6.0 00:02:00.256 SO libspdk_bdev_virtio.so.6.0 00:02:00.256 SYMLINK libspdk_bdev_lvol.so 00:02:00.514 SYMLINK libspdk_bdev_virtio.so 00:02:00.772 LIB libspdk_bdev_raid.a 00:02:00.772 SO libspdk_bdev_raid.so.6.0 00:02:00.772 SYMLINK libspdk_bdev_raid.so 00:02:02.146 LIB libspdk_bdev_nvme.a 00:02:02.146 SO libspdk_bdev_nvme.so.7.0 00:02:02.146 SYMLINK libspdk_bdev_nvme.so 00:02:02.403 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:02.403 CC module/event/subsystems/keyring/keyring.o 00:02:02.403 CC module/event/subsystems/iobuf/iobuf.o 00:02:02.403 CC module/event/subsystems/scheduler/scheduler.o 00:02:02.403 CC module/event/subsystems/vmd/vmd.o 00:02:02.403 CC module/event/subsystems/sock/sock.o 00:02:02.403 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:02.403 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:02.403 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:02.403 LIB libspdk_event_keyring.a 00:02:02.403 LIB libspdk_event_vhost_blk.a 00:02:02.403 LIB libspdk_event_vfu_tgt.a 00:02:02.403 LIB libspdk_event_scheduler.a 00:02:02.403 LIB libspdk_event_sock.a 00:02:02.662 LIB libspdk_event_vmd.a 00:02:02.662 SO libspdk_event_keyring.so.1.0 00:02:02.662 LIB libspdk_event_iobuf.a 00:02:02.662 SO libspdk_event_vhost_blk.so.3.0 00:02:02.662 SO libspdk_event_vfu_tgt.so.3.0 00:02:02.662 SO libspdk_event_scheduler.so.4.0 00:02:02.662 SO libspdk_event_sock.so.5.0 00:02:02.662 SO libspdk_event_vmd.so.6.0 00:02:02.662 SO libspdk_event_iobuf.so.3.0 00:02:02.662 SYMLINK libspdk_event_keyring.so 00:02:02.662 SYMLINK libspdk_event_vhost_blk.so 00:02:02.662 SYMLINK libspdk_event_vfu_tgt.so 00:02:02.662 SYMLINK libspdk_event_sock.so 00:02:02.662 SYMLINK libspdk_event_scheduler.so 00:02:02.662 SYMLINK libspdk_event_vmd.so 00:02:02.662 SYMLINK libspdk_event_iobuf.so 00:02:02.931 CC module/event/subsystems/accel/accel.o 00:02:02.931 LIB libspdk_event_accel.a 00:02:02.931 SO libspdk_event_accel.so.6.0 00:02:02.931 SYMLINK libspdk_event_accel.so 00:02:03.193 CC module/event/subsystems/bdev/bdev.o 00:02:03.450 LIB libspdk_event_bdev.a 00:02:03.450 SO libspdk_event_bdev.so.6.0 00:02:03.450 SYMLINK libspdk_event_bdev.so 00:02:03.708 CC module/event/subsystems/nbd/nbd.o 00:02:03.708 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:03.708 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:03.708 CC module/event/subsystems/ublk/ublk.o 00:02:03.708 CC module/event/subsystems/scsi/scsi.o 00:02:03.708 LIB libspdk_event_nbd.a 00:02:03.708 LIB 
libspdk_event_ublk.a 00:02:03.708 LIB libspdk_event_scsi.a 00:02:03.708 SO libspdk_event_nbd.so.6.0 00:02:03.708 SO libspdk_event_ublk.so.3.0 00:02:03.708 SO libspdk_event_scsi.so.6.0 00:02:03.967 SYMLINK libspdk_event_ublk.so 00:02:03.967 SYMLINK libspdk_event_nbd.so 00:02:03.967 LIB libspdk_event_nvmf.a 00:02:03.967 SYMLINK libspdk_event_scsi.so 00:02:03.967 SO libspdk_event_nvmf.so.6.0 00:02:03.967 SYMLINK libspdk_event_nvmf.so 00:02:03.967 CC module/event/subsystems/iscsi/iscsi.o 00:02:03.967 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:04.225 LIB libspdk_event_vhost_scsi.a 00:02:04.226 LIB libspdk_event_iscsi.a 00:02:04.226 SO libspdk_event_vhost_scsi.so.3.0 00:02:04.226 SO libspdk_event_iscsi.so.6.0 00:02:04.226 SYMLINK libspdk_event_vhost_scsi.so 00:02:04.226 SYMLINK libspdk_event_iscsi.so 00:02:04.484 SO libspdk.so.6.0 00:02:04.484 SYMLINK libspdk.so 00:02:04.484 CXX app/trace/trace.o 00:02:04.484 CC app/trace_record/trace_record.o 00:02:04.484 CC test/rpc_client/rpc_client_test.o 00:02:04.484 TEST_HEADER include/spdk/accel.h 00:02:04.484 TEST_HEADER include/spdk/accel_module.h 00:02:04.484 TEST_HEADER include/spdk/assert.h 00:02:04.484 CC app/spdk_nvme_perf/perf.o 00:02:04.484 TEST_HEADER include/spdk/barrier.h 00:02:04.484 TEST_HEADER include/spdk/base64.h 00:02:04.484 TEST_HEADER include/spdk/bdev.h 00:02:04.484 TEST_HEADER include/spdk/bdev_module.h 00:02:04.484 TEST_HEADER include/spdk/bdev_zone.h 00:02:04.484 CC app/spdk_nvme_identify/identify.o 00:02:04.484 TEST_HEADER include/spdk/bit_array.h 00:02:04.484 TEST_HEADER include/spdk/bit_pool.h 00:02:04.484 CC app/spdk_nvme_discover/discovery_aer.o 00:02:04.484 TEST_HEADER include/spdk/blob_bdev.h 00:02:04.484 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:04.484 CC app/spdk_top/spdk_top.o 00:02:04.484 TEST_HEADER include/spdk/blobfs.h 00:02:04.484 CC app/spdk_lspci/spdk_lspci.o 00:02:04.484 TEST_HEADER include/spdk/blob.h 00:02:04.484 TEST_HEADER include/spdk/conf.h 00:02:04.484 TEST_HEADER include/spdk/config.h 00:02:04.484 TEST_HEADER include/spdk/cpuset.h 00:02:04.484 TEST_HEADER include/spdk/crc16.h 00:02:04.484 TEST_HEADER include/spdk/crc32.h 00:02:04.484 TEST_HEADER include/spdk/crc64.h 00:02:04.484 TEST_HEADER include/spdk/dif.h 00:02:04.484 TEST_HEADER include/spdk/dma.h 00:02:04.484 TEST_HEADER include/spdk/endian.h 00:02:04.484 TEST_HEADER include/spdk/env_dpdk.h 00:02:04.484 TEST_HEADER include/spdk/env.h 00:02:04.484 TEST_HEADER include/spdk/event.h 00:02:04.484 TEST_HEADER include/spdk/fd_group.h 00:02:04.484 TEST_HEADER include/spdk/fd.h 00:02:04.484 TEST_HEADER include/spdk/file.h 00:02:04.484 TEST_HEADER include/spdk/ftl.h 00:02:04.484 TEST_HEADER include/spdk/gpt_spec.h 00:02:04.484 TEST_HEADER include/spdk/hexlify.h 00:02:04.484 TEST_HEADER include/spdk/histogram_data.h 00:02:04.484 TEST_HEADER include/spdk/idxd.h 00:02:04.484 TEST_HEADER include/spdk/idxd_spec.h 00:02:04.484 TEST_HEADER include/spdk/init.h 00:02:04.484 TEST_HEADER include/spdk/ioat_spec.h 00:02:04.484 TEST_HEADER include/spdk/ioat.h 00:02:04.484 TEST_HEADER include/spdk/iscsi_spec.h 00:02:04.484 TEST_HEADER include/spdk/jsonrpc.h 00:02:04.484 TEST_HEADER include/spdk/json.h 00:02:04.484 TEST_HEADER include/spdk/keyring.h 00:02:04.484 TEST_HEADER include/spdk/keyring_module.h 00:02:04.484 TEST_HEADER include/spdk/likely.h 00:02:04.484 TEST_HEADER include/spdk/log.h 00:02:04.484 TEST_HEADER include/spdk/lvol.h 00:02:04.484 TEST_HEADER include/spdk/memory.h 00:02:04.484 TEST_HEADER include/spdk/mmio.h 00:02:04.484 TEST_HEADER 
include/spdk/nbd.h 00:02:04.484 TEST_HEADER include/spdk/net.h 00:02:04.484 TEST_HEADER include/spdk/notify.h 00:02:04.484 TEST_HEADER include/spdk/nvme.h 00:02:04.484 TEST_HEADER include/spdk/nvme_intel.h 00:02:04.484 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:04.484 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:04.748 TEST_HEADER include/spdk/nvme_spec.h 00:02:04.748 TEST_HEADER include/spdk/nvme_zns.h 00:02:04.748 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:04.748 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:04.748 TEST_HEADER include/spdk/nvmf.h 00:02:04.749 TEST_HEADER include/spdk/nvmf_spec.h 00:02:04.749 TEST_HEADER include/spdk/nvmf_transport.h 00:02:04.749 TEST_HEADER include/spdk/opal.h 00:02:04.749 TEST_HEADER include/spdk/opal_spec.h 00:02:04.749 TEST_HEADER include/spdk/pci_ids.h 00:02:04.749 TEST_HEADER include/spdk/pipe.h 00:02:04.749 TEST_HEADER include/spdk/queue.h 00:02:04.749 TEST_HEADER include/spdk/reduce.h 00:02:04.749 TEST_HEADER include/spdk/rpc.h 00:02:04.749 TEST_HEADER include/spdk/scheduler.h 00:02:04.749 TEST_HEADER include/spdk/scsi.h 00:02:04.749 TEST_HEADER include/spdk/scsi_spec.h 00:02:04.749 TEST_HEADER include/spdk/sock.h 00:02:04.749 TEST_HEADER include/spdk/stdinc.h 00:02:04.749 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:04.749 TEST_HEADER include/spdk/string.h 00:02:04.749 TEST_HEADER include/spdk/thread.h 00:02:04.749 TEST_HEADER include/spdk/trace.h 00:02:04.749 TEST_HEADER include/spdk/trace_parser.h 00:02:04.749 TEST_HEADER include/spdk/tree.h 00:02:04.749 TEST_HEADER include/spdk/ublk.h 00:02:04.749 TEST_HEADER include/spdk/util.h 00:02:04.749 TEST_HEADER include/spdk/uuid.h 00:02:04.749 TEST_HEADER include/spdk/version.h 00:02:04.749 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:04.749 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:04.749 TEST_HEADER include/spdk/vhost.h 00:02:04.749 TEST_HEADER include/spdk/vmd.h 00:02:04.749 TEST_HEADER include/spdk/xor.h 00:02:04.749 TEST_HEADER include/spdk/zipf.h 00:02:04.749 CXX test/cpp_headers/accel.o 00:02:04.749 CXX test/cpp_headers/accel_module.o 00:02:04.749 CXX test/cpp_headers/assert.o 00:02:04.749 CXX test/cpp_headers/barrier.o 00:02:04.749 CXX test/cpp_headers/base64.o 00:02:04.749 CXX test/cpp_headers/bdev.o 00:02:04.749 CXX test/cpp_headers/bdev_module.o 00:02:04.749 CC app/spdk_dd/spdk_dd.o 00:02:04.749 CXX test/cpp_headers/bdev_zone.o 00:02:04.749 CXX test/cpp_headers/bit_array.o 00:02:04.749 CXX test/cpp_headers/bit_pool.o 00:02:04.749 CXX test/cpp_headers/blob_bdev.o 00:02:04.749 CXX test/cpp_headers/blobfs_bdev.o 00:02:04.749 CXX test/cpp_headers/blobfs.o 00:02:04.749 CXX test/cpp_headers/blob.o 00:02:04.749 CXX test/cpp_headers/conf.o 00:02:04.749 CC app/iscsi_tgt/iscsi_tgt.o 00:02:04.749 CXX test/cpp_headers/config.o 00:02:04.749 CC app/nvmf_tgt/nvmf_main.o 00:02:04.749 CXX test/cpp_headers/cpuset.o 00:02:04.749 CXX test/cpp_headers/crc16.o 00:02:04.749 CXX test/cpp_headers/crc32.o 00:02:04.749 CC test/app/jsoncat/jsoncat.o 00:02:04.749 CC app/spdk_tgt/spdk_tgt.o 00:02:04.749 CC examples/ioat/verify/verify.o 00:02:04.749 CC test/thread/poller_perf/poller_perf.o 00:02:04.749 CC examples/util/zipf/zipf.o 00:02:04.749 CC test/env/memory/memory_ut.o 00:02:04.749 CC examples/ioat/perf/perf.o 00:02:04.749 CC app/fio/nvme/fio_plugin.o 00:02:04.749 CC test/app/stub/stub.o 00:02:04.749 CC test/env/vtophys/vtophys.o 00:02:04.749 CC test/app/histogram_perf/histogram_perf.o 00:02:04.749 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:04.749 CC test/env/pci/pci_ut.o 00:02:04.749 
CC test/dma/test_dma/test_dma.o 00:02:04.749 CC test/app/bdev_svc/bdev_svc.o 00:02:04.749 CC app/fio/bdev/fio_plugin.o 00:02:05.007 LINK spdk_lspci 00:02:05.007 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:05.007 CC test/env/mem_callbacks/mem_callbacks.o 00:02:05.007 LINK rpc_client_test 00:02:05.007 LINK spdk_nvme_discover 00:02:05.007 LINK jsoncat 00:02:05.007 CXX test/cpp_headers/crc64.o 00:02:05.007 LINK histogram_perf 00:02:05.007 LINK interrupt_tgt 00:02:05.007 CXX test/cpp_headers/dif.o 00:02:05.007 CXX test/cpp_headers/dma.o 00:02:05.007 LINK vtophys 00:02:05.007 LINK poller_perf 00:02:05.007 CXX test/cpp_headers/endian.o 00:02:05.007 LINK zipf 00:02:05.007 CXX test/cpp_headers/env_dpdk.o 00:02:05.007 LINK nvmf_tgt 00:02:05.007 CXX test/cpp_headers/env.o 00:02:05.007 CXX test/cpp_headers/event.o 00:02:05.007 LINK env_dpdk_post_init 00:02:05.007 CXX test/cpp_headers/fd_group.o 00:02:05.007 CXX test/cpp_headers/fd.o 00:02:05.007 CXX test/cpp_headers/file.o 00:02:05.007 CXX test/cpp_headers/ftl.o 00:02:05.007 CXX test/cpp_headers/gpt_spec.o 00:02:05.268 CXX test/cpp_headers/hexlify.o 00:02:05.268 LINK stub 00:02:05.268 LINK spdk_trace_record 00:02:05.268 LINK iscsi_tgt 00:02:05.268 CXX test/cpp_headers/histogram_data.o 00:02:05.268 LINK verify 00:02:05.268 CXX test/cpp_headers/idxd.o 00:02:05.269 LINK bdev_svc 00:02:05.269 CXX test/cpp_headers/idxd_spec.o 00:02:05.269 LINK spdk_tgt 00:02:05.269 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:05.269 LINK ioat_perf 00:02:05.269 CXX test/cpp_headers/init.o 00:02:05.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:05.269 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:05.269 CXX test/cpp_headers/ioat.o 00:02:05.269 CXX test/cpp_headers/ioat_spec.o 00:02:05.269 CXX test/cpp_headers/iscsi_spec.o 00:02:05.269 LINK spdk_dd 00:02:05.269 CXX test/cpp_headers/json.o 00:02:05.535 CXX test/cpp_headers/jsonrpc.o 00:02:05.535 CXX test/cpp_headers/keyring.o 00:02:05.535 CXX test/cpp_headers/keyring_module.o 00:02:05.535 CXX test/cpp_headers/likely.o 00:02:05.535 CXX test/cpp_headers/log.o 00:02:05.535 LINK spdk_trace 00:02:05.535 CXX test/cpp_headers/lvol.o 00:02:05.535 CXX test/cpp_headers/memory.o 00:02:05.535 CXX test/cpp_headers/mmio.o 00:02:05.535 CXX test/cpp_headers/nbd.o 00:02:05.535 CXX test/cpp_headers/net.o 00:02:05.535 CXX test/cpp_headers/notify.o 00:02:05.535 CXX test/cpp_headers/nvme.o 00:02:05.535 CXX test/cpp_headers/nvme_intel.o 00:02:05.535 CXX test/cpp_headers/nvme_ocssd.o 00:02:05.535 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:05.535 CXX test/cpp_headers/nvme_spec.o 00:02:05.535 CXX test/cpp_headers/nvme_zns.o 00:02:05.535 LINK pci_ut 00:02:05.535 CXX test/cpp_headers/nvmf_cmd.o 00:02:05.535 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:05.535 CXX test/cpp_headers/nvmf.o 00:02:05.535 CXX test/cpp_headers/nvmf_spec.o 00:02:05.535 CXX test/cpp_headers/nvmf_transport.o 00:02:05.535 LINK test_dma 00:02:05.535 CXX test/cpp_headers/opal.o 00:02:05.535 CXX test/cpp_headers/opal_spec.o 00:02:05.797 LINK nvme_fuzz 00:02:05.797 CC test/event/event_perf/event_perf.o 00:02:05.797 CXX test/cpp_headers/pci_ids.o 00:02:05.797 CC test/event/reactor/reactor.o 00:02:05.797 CXX test/cpp_headers/pipe.o 00:02:05.797 CXX test/cpp_headers/queue.o 00:02:05.797 CC examples/sock/hello_world/hello_sock.o 00:02:05.797 CXX test/cpp_headers/reduce.o 00:02:05.797 CC examples/vmd/lsvmd/lsvmd.o 00:02:05.797 CXX test/cpp_headers/rpc.o 00:02:05.797 CXX test/cpp_headers/scheduler.o 00:02:05.797 CC examples/idxd/perf/perf.o 00:02:05.797 CC examples/vmd/led/led.o 
00:02:05.797 CXX test/cpp_headers/scsi.o 00:02:05.797 LINK spdk_nvme 00:02:05.797 LINK spdk_bdev 00:02:05.797 CC test/event/reactor_perf/reactor_perf.o 00:02:05.797 CXX test/cpp_headers/scsi_spec.o 00:02:05.797 CC examples/thread/thread/thread_ex.o 00:02:05.797 CC test/event/app_repeat/app_repeat.o 00:02:05.797 CXX test/cpp_headers/sock.o 00:02:05.797 CXX test/cpp_headers/stdinc.o 00:02:05.797 CXX test/cpp_headers/string.o 00:02:05.797 CXX test/cpp_headers/thread.o 00:02:06.059 CC test/event/scheduler/scheduler.o 00:02:06.059 CXX test/cpp_headers/trace.o 00:02:06.059 CXX test/cpp_headers/trace_parser.o 00:02:06.059 CXX test/cpp_headers/tree.o 00:02:06.059 CXX test/cpp_headers/ublk.o 00:02:06.059 CXX test/cpp_headers/util.o 00:02:06.059 CXX test/cpp_headers/uuid.o 00:02:06.059 CXX test/cpp_headers/version.o 00:02:06.059 CXX test/cpp_headers/vfio_user_pci.o 00:02:06.059 CXX test/cpp_headers/vfio_user_spec.o 00:02:06.059 CXX test/cpp_headers/vhost.o 00:02:06.059 CXX test/cpp_headers/vmd.o 00:02:06.059 CXX test/cpp_headers/xor.o 00:02:06.059 LINK reactor 00:02:06.059 LINK event_perf 00:02:06.059 CXX test/cpp_headers/zipf.o 00:02:06.059 CC app/vhost/vhost.o 00:02:06.059 LINK vhost_fuzz 00:02:06.059 LINK lsvmd 00:02:06.059 LINK spdk_nvme_perf 00:02:06.059 LINK mem_callbacks 00:02:06.059 LINK reactor_perf 00:02:06.059 LINK led 00:02:06.318 LINK spdk_top 00:02:06.318 LINK spdk_nvme_identify 00:02:06.318 LINK app_repeat 00:02:06.318 LINK hello_sock 00:02:06.318 CC test/nvme/overhead/overhead.o 00:02:06.318 CC test/nvme/reset/reset.o 00:02:06.318 CC test/nvme/simple_copy/simple_copy.o 00:02:06.318 CC test/nvme/e2edp/nvme_dp.o 00:02:06.318 CC test/nvme/err_injection/err_injection.o 00:02:06.318 CC test/nvme/reserve/reserve.o 00:02:06.318 CC test/nvme/sgl/sgl.o 00:02:06.318 CC test/nvme/boot_partition/boot_partition.o 00:02:06.318 CC test/nvme/aer/aer.o 00:02:06.318 CC test/nvme/startup/startup.o 00:02:06.318 LINK thread 00:02:06.318 CC test/nvme/connect_stress/connect_stress.o 00:02:06.318 CC test/blobfs/mkfs/mkfs.o 00:02:06.318 CC test/nvme/compliance/nvme_compliance.o 00:02:06.318 CC test/accel/dif/dif.o 00:02:06.318 CC test/nvme/fused_ordering/fused_ordering.o 00:02:06.318 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:06.318 LINK scheduler 00:02:06.318 CC test/nvme/fdp/fdp.o 00:02:06.318 LINK idxd_perf 00:02:06.318 CC test/lvol/esnap/esnap.o 00:02:06.576 CC test/nvme/cuse/cuse.o 00:02:06.576 LINK vhost 00:02:06.576 LINK boot_partition 00:02:06.576 LINK startup 00:02:06.576 LINK err_injection 00:02:06.576 LINK connect_stress 00:02:06.576 LINK doorbell_aers 00:02:06.576 LINK fused_ordering 00:02:06.576 LINK reset 00:02:06.576 LINK mkfs 00:02:06.576 LINK sgl 00:02:06.576 LINK simple_copy 00:02:06.834 LINK reserve 00:02:06.834 CC examples/nvme/abort/abort.o 00:02:06.834 CC examples/nvme/arbitration/arbitration.o 00:02:06.834 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.834 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.834 CC examples/nvme/reconnect/reconnect.o 00:02:06.834 CC examples/nvme/hello_world/hello_world.o 00:02:06.834 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:06.834 CC examples/nvme/hotplug/hotplug.o 00:02:06.834 LINK nvme_compliance 00:02:06.834 LINK fdp 00:02:06.834 LINK overhead 00:02:06.834 LINK nvme_dp 00:02:06.834 LINK aer 00:02:06.834 CC examples/accel/perf/accel_perf.o 00:02:06.834 CC examples/blob/cli/blobcli.o 00:02:06.834 LINK memory_ut 00:02:06.834 CC examples/blob/hello_world/hello_blob.o 00:02:07.093 LINK dif 00:02:07.093 LINK pmr_persistence 
00:02:07.093 LINK hello_world 00:02:07.093 LINK cmb_copy 00:02:07.093 LINK hotplug 00:02:07.093 LINK arbitration 00:02:07.093 LINK reconnect 00:02:07.093 LINK hello_blob 00:02:07.351 LINK abort 00:02:07.351 LINK nvme_manage 00:02:07.351 LINK accel_perf 00:02:07.351 CC test/bdev/bdevio/bdevio.o 00:02:07.351 LINK blobcli 00:02:07.609 LINK iscsi_fuzz 00:02:07.609 CC examples/bdev/hello_world/hello_bdev.o 00:02:07.609 CC examples/bdev/bdevperf/bdevperf.o 00:02:07.867 LINK bdevio 00:02:07.867 LINK hello_bdev 00:02:07.867 LINK cuse 00:02:08.433 LINK bdevperf 00:02:08.999 CC examples/nvmf/nvmf/nvmf.o 00:02:09.257 LINK nvmf 00:02:11.788 LINK esnap 00:02:11.788 00:02:11.788 real 0m49.220s 00:02:11.788 user 10m7.902s 00:02:11.788 sys 2m26.864s 00:02:11.788 23:39:42 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:11.788 23:39:42 make -- common/autotest_common.sh@10 -- $ set +x 00:02:11.788 ************************************ 00:02:11.788 END TEST make 00:02:11.788 ************************************ 00:02:11.788 23:39:42 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:11.788 23:39:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:11.788 23:39:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:11.788 23:39:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.788 23:39:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:11.788 23:39:42 -- pm/common@44 -- $ pid=3164279 00:02:11.788 23:39:42 -- pm/common@50 -- $ kill -TERM 3164279 00:02:11.788 23:39:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.788 23:39:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:11.788 23:39:42 -- pm/common@44 -- $ pid=3164281 00:02:11.788 23:39:42 -- pm/common@50 -- $ kill -TERM 3164281 00:02:11.788 23:39:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.788 23:39:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:11.788 23:39:42 -- pm/common@44 -- $ pid=3164283 00:02:11.788 23:39:42 -- pm/common@50 -- $ kill -TERM 3164283 00:02:11.788 23:39:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:11.788 23:39:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:11.788 23:39:42 -- pm/common@44 -- $ pid=3164311 00:02:11.788 23:39:42 -- pm/common@50 -- $ sudo -E kill -TERM 3164311 00:02:11.788 23:39:42 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:11.788 23:39:42 -- nvmf/common.sh@7 -- # uname -s 00:02:11.788 23:39:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:11.788 23:39:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:11.788 23:39:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:11.788 23:39:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:11.788 23:39:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:11.788 23:39:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:11.788 23:39:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:11.788 23:39:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:11.788 23:39:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:11.788 23:39:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:11.788 23:39:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:11.788 23:39:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:11.788 23:39:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:11.788 23:39:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:11.788 23:39:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:11.788 23:39:42 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:11.788 23:39:42 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:11.788 23:39:42 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:11.788 23:39:42 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:11.788 23:39:42 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:11.788 23:39:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.788 23:39:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.788 23:39:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.788 23:39:42 -- paths/export.sh@5 -- # export PATH 00:02:11.788 23:39:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:11.788 23:39:42 -- nvmf/common.sh@47 -- # : 0 00:02:11.788 23:39:42 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:11.788 23:39:42 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:11.788 23:39:42 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:11.788 23:39:42 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:11.788 23:39:42 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:11.788 23:39:42 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:11.788 23:39:42 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:11.788 23:39:42 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:11.788 23:39:42 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:11.788 23:39:42 -- spdk/autotest.sh@32 -- # uname -s 00:02:11.788 23:39:42 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:11.788 23:39:42 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:11.788 23:39:42 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.788 23:39:42 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:11.788 23:39:42 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:11.788 23:39:42 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:11.788 23:39:42 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.048 23:39:42 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.048 23:39:42 -- spdk/autotest.sh@48 -- # udevadm_pid=3220382 00:02:12.048 23:39:42 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.048 23:39:42 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:12.048 23:39:42 -- pm/common@17 -- # local monitor 00:02:12.048 23:39:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.048 23:39:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.048 23:39:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.048 23:39:42 -- pm/common@21 -- # date +%s 00:02:12.048 23:39:42 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:12.048 23:39:42 -- pm/common@21 -- # date +%s 00:02:12.048 23:39:42 -- pm/common@25 -- # sleep 1 00:02:12.048 23:39:42 -- pm/common@21 -- # date +%s 00:02:12.048 23:39:42 -- pm/common@21 -- # date +%s 00:02:12.048 23:39:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857182 00:02:12.048 23:39:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857182 00:02:12.048 23:39:42 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857182 00:02:12.048 23:39:42 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721857182 00:02:12.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857182_collect-vmstat.pm.log 00:02:12.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857182_collect-cpu-load.pm.log 00:02:12.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857182_collect-cpu-temp.pm.log 00:02:12.048 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721857182_collect-bmc-pm.bmc.pm.log 00:02:13.013 23:39:43 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:13.013 23:39:43 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:13.013 23:39:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:13.013 23:39:43 -- common/autotest_common.sh@10 -- # set +x 00:02:13.013 23:39:43 -- spdk/autotest.sh@59 -- # create_test_list 00:02:13.013 23:39:43 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:13.013 23:39:43 -- common/autotest_common.sh@10 -- # set +x 00:02:13.013 23:39:43 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:13.013 23:39:43 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.013 23:39:43 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
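The four Redirecting lines above correspond to the collect-cpu-load, collect-vmstat, collect-cpu-temp and collect-bmc-pm monitors being launched in the background, each recording its pid under the output power/ directory so that the stop_monitor_resources step seen earlier in the log can tear them down with kill -TERM. A minimal sketch of that pid-file pattern, assuming a hypothetical POWER_DIR and using vmstat as a stand-in collector (this is illustrative, not the exact pm/common implementation):

#!/usr/bin/env bash
# Illustrative pid-file monitor pattern; POWER_DIR and the sample
# collector are assumptions, not the exact pm/common code.
POWER_DIR=${POWER_DIR:-/tmp/power}

start_monitor() {
  local name=$1; shift
  mkdir -p "$POWER_DIR"
  "$@" &                                   # launch the collector in the background
  echo $! > "$POWER_DIR/$name.pid"         # record its pid for later cleanup
}

stop_monitor_resources() {
  local pidfile pid
  for pidfile in "$POWER_DIR"/*.pid; do
    [[ -e $pidfile ]] || continue          # no monitors were started
    pid=$(<"$pidfile")
    kill -TERM "$pid" 2>/dev/null || true  # mirrors the 'kill -TERM <pid>' lines in the log
    rm -f "$pidfile"
  done
}

start_monitor collect-vmstat vmstat -n 1
sleep 3                                    # let the collector take a few samples
stop_monitor_resources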
00:02:13.013 23:39:43 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:13.013 23:39:43 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:13.013 23:39:43 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:13.013 23:39:43 -- common/autotest_common.sh@1453 -- # uname 00:02:13.013 23:39:43 -- common/autotest_common.sh@1453 -- # '[' Linux = FreeBSD ']' 00:02:13.013 23:39:43 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:13.013 23:39:43 -- common/autotest_common.sh@1473 -- # uname 00:02:13.013 23:39:43 -- common/autotest_common.sh@1473 -- # [[ Linux = FreeBSD ]] 00:02:13.013 23:39:43 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:13.013 23:39:43 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:13.013 23:39:43 -- spdk/autotest.sh@72 -- # hash lcov 00:02:13.013 23:39:43 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:13.013 23:39:43 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:13.013 --rc lcov_branch_coverage=1 00:02:13.013 --rc lcov_function_coverage=1 00:02:13.013 --rc genhtml_branch_coverage=1 00:02:13.013 --rc genhtml_function_coverage=1 00:02:13.013 --rc genhtml_legend=1 00:02:13.013 --rc geninfo_all_blocks=1 00:02:13.013 ' 00:02:13.013 23:39:43 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:13.013 --rc lcov_branch_coverage=1 00:02:13.013 --rc lcov_function_coverage=1 00:02:13.013 --rc genhtml_branch_coverage=1 00:02:13.013 --rc genhtml_function_coverage=1 00:02:13.013 --rc genhtml_legend=1 00:02:13.013 --rc geninfo_all_blocks=1 00:02:13.013 ' 00:02:13.013 23:39:43 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:13.013 --rc lcov_branch_coverage=1 00:02:13.013 --rc lcov_function_coverage=1 00:02:13.013 --rc genhtml_branch_coverage=1 00:02:13.013 --rc genhtml_function_coverage=1 00:02:13.013 --rc genhtml_legend=1 00:02:13.013 --rc geninfo_all_blocks=1 00:02:13.013 --no-external' 00:02:13.013 23:39:43 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:13.013 --rc lcov_branch_coverage=1 00:02:13.013 --rc lcov_function_coverage=1 00:02:13.013 --rc genhtml_branch_coverage=1 00:02:13.013 --rc genhtml_function_coverage=1 00:02:13.013 --rc genhtml_legend=1 00:02:13.013 --rc geninfo_all_blocks=1 00:02:13.013 --no-external' 00:02:13.013 23:39:43 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:13.013 lcov: LCOV version 1.14 00:02:13.013 23:39:43 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:14.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:14.913 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:14.913 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:14.913 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:14.913 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found
00:02:14.913 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno
00:02:14.914 [the same "<header>.gcno:no functions found" / "geninfo: WARNING: GCOV did not produce any data" pair repeats for every remaining header under test/cpp_headers; the intervening ~70 warning pairs are omitted here and only the tail of the run is kept below]
00:02:14.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found
00:02:14.914 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:14.914 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:14.914 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:14.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:14.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:14.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:14.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:14.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:14.915 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:29.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:29.785 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:47.857 23:40:17 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:47.857 23:40:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:47.857 23:40:17 -- common/autotest_common.sh@10 -- # set +x 00:02:47.857 23:40:17 -- spdk/autotest.sh@91 -- # rm -f 00:02:47.857 23:40:17 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.115 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:48.115 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.115 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.373 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.373 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.373 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.373 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.373 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.373 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.373 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:48.373 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:48.373 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:48.373 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:48.373 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:48.373 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:48.373 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:48.373 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:48.632 23:40:18 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:48.632 23:40:19 -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:48.632 23:40:19 -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:48.632 23:40:19 -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:48.632 23:40:19 -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:48.632 23:40:19 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:48.632 23:40:19 -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:48.632 
23:40:19 -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:48.632 23:40:19 -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:48.632 23:40:19 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:48.632 23:40:19 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:48.632 23:40:19 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:48.632 23:40:19 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:48.632 23:40:19 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:48.632 23:40:19 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:48.632 No valid GPT data, bailing 00:02:48.632 23:40:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:48.632 23:40:19 -- scripts/common.sh@391 -- # pt= 00:02:48.632 23:40:19 -- scripts/common.sh@392 -- # return 1 00:02:48.632 23:40:19 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:48.632 1+0 records in 00:02:48.632 1+0 records out 00:02:48.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00253089 s, 414 MB/s 00:02:48.632 23:40:19 -- spdk/autotest.sh@118 -- # sync 00:02:48.632 23:40:19 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:48.632 23:40:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:48.632 23:40:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:50.532 23:40:20 -- spdk/autotest.sh@124 -- # uname -s 00:02:50.533 23:40:20 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:02:50.533 23:40:20 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.533 23:40:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.533 23:40:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.533 23:40:20 -- common/autotest_common.sh@10 -- # set +x 00:02:50.533 ************************************ 00:02:50.533 START TEST setup.sh 00:02:50.533 ************************************ 00:02:50.533 23:40:20 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:50.533 * Looking for test storage... 00:02:50.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.533 23:40:20 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:02:50.533 23:40:20 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:50.533 23:40:20 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.533 23:40:20 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:50.533 23:40:20 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:50.533 23:40:20 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:02:50.533 ************************************ 00:02:50.533 START TEST acl 00:02:50.533 ************************************ 00:02:50.533 23:40:20 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:50.533 * Looking for test storage... 
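The reset sequence above reduces to three checks per NVMe namespace: skip zoned devices, leave a disk alone if it carries a recognizable partition table, and otherwise zero its first MiB so stale metadata cannot leak into the next test run. A rough, abridged bash sketch of that flow (the real scripts also run spdk-gpt.py before falling back to blkid, as the "No valid GPT data, bailing" line shows):

#!/usr/bin/env bash
# Simplified sketch of the per-device cleanup traced above; the real
# logic lives in autotest.sh, autotest_common.sh and scripts/common.sh.
shopt -s extglob                 # needed for the /dev/nvme*n!(*p*) glob

for dev in /dev/nvme*n!(*p*); do
  name=$(basename "$dev")

  # is_block_zoned: a value other than "none" marks a zoned namespace
  if [[ -e /sys/block/$name/queue/zoned &&
        $(</sys/block/$name/queue/zoned) != none ]]; then
    continue
  fi

  # block_in_use (abridged): a partition-table signature means the disk
  # is claimed; an empty PTTYPE, as in the log, means it is free
  if [[ -n $(blkid -s PTTYPE -o value "$dev") ]]; then
    continue
  fi

  # otherwise the first MiB is zeroed, exactly as the dd line shows
  dd if=/dev/zero of="$dev" bs=1M count=1
done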
00:02:50.533 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1667 -- # zoned_devs=() 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1667 -- # local -gA zoned_devs 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1668 -- # local nvme bdf 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme* 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:50.533 23:40:21 setup.sh.acl -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:02:50.533 23:40:21 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:02:50.533 23:40:21 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:50.533 23:40:21 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:51.907 23:40:22 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:02:51.907 23:40:22 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:02:51.907 23:40:22 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:51.907 23:40:22 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:02:51.908 23:40:22 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:02:51.908 23:40:22 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:52.842 Hugepages 00:02:52.842 node hugesize free / total 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.842 00:02:52.842 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:52.842 23:40:23 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]]
00:02:52.842 [the status loop walks each remaining ioatdma channel (0000:00:04.1 through 0000:80:04.3) the same way: the BDF matches *:*:*.*, '[[ ioatdma == nvme ]]' fails, and the device is skipped with 'continue'; those ~13 identical iterations are omitted here]
00:02:53.100 23:40:23 
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:53.100 23:40:23 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:02:53.100 23:40:23 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:53.100 23:40:23 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.100 23:40:23 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:53.100 ************************************ 00:02:53.100 START TEST denied 00:02:53.100 ************************************ 00:02:53.100 23:40:23 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:02:53.100 23:40:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:53.100 23:40:23 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:02:53.100 23:40:23 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:53.100 23:40:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.100 23:40:23 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:54.473 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:54.473 23:40:24 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:54.473 23:40:24 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.034 00:02:57.034 real 0m3.721s 00:02:57.034 user 0m1.126s 00:02:57.034 sys 0m1.690s 00:02:57.034 23:40:27 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:02:57.034 23:40:27 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:02:57.034 ************************************ 00:02:57.034 END TEST denied 00:02:57.034 ************************************ 00:02:57.034 23:40:27 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:57.034 23:40:27 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:02:57.034 23:40:27 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.034 23:40:27 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:02:57.034 ************************************ 00:02:57.034 START TEST allowed 00:02:57.034 ************************************ 00:02:57.034 23:40:27 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:02:57.034 23:40:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:57.034 23:40:27 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:02:57.034 23:40:27 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:57.034 23:40:27 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.034 23:40:27 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:59.563 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:59.563 23:40:29 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:02:59.563 23:40:29 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:02:59.563 23:40:29 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:02:59.563 23:40:29 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:59.563 23:40:29 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:00.939 00:03:00.939 real 0m3.794s 00:03:00.939 user 0m1.022s 00:03:00.939 sys 0m1.626s 00:03:00.939 23:40:31 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:00.939 23:40:31 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:00.939 ************************************ 00:03:00.939 END TEST allowed 00:03:00.939 ************************************ 00:03:00.939 00:03:00.939 real 0m10.193s 00:03:00.939 user 0m3.260s 00:03:00.939 sys 0m4.940s 00:03:00.939 23:40:31 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:00.939 23:40:31 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:00.939 ************************************ 00:03:00.939 END TEST acl 00:03:00.939 ************************************ 00:03:00.939 23:40:31 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.939 23:40:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:00.939 23:40:31 setup.sh -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:00.939 23:40:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:00.939 ************************************ 00:03:00.939 START TEST hugepages 00:03:00.939 ************************************ 00:03:00.939 23:40:31 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:00.939 * Looking for test storage... 00:03:00.939 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 43874956 kB' 'MemAvailable: 47353500 kB' 'Buffers: 2704 kB' 'Cached: 10166052 kB' 'SwapCached: 0 kB' 'Active: 7152332 kB' 'Inactive: 3493852 kB' 'Active(anon): 6763428 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480676 kB' 'Mapped: 208936 kB' 'Shmem: 6286000 kB' 'KReclaimable: 179256 kB' 'Slab: 550416 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 371160 kB' 'KernelStack: 12736 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562316 kB' 'Committed_AS: 7877572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196340 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:00.939 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue
00:03:00.939 [the get_meminfo loop compares every remaining /proc/meminfo key (MemFree, MemAvailable, Buffers, ... AnonHugePages; all of the fields already appear in the dump above) against Hugepagesize and skips each with 'continue'; those identical iterations are omitted here]
00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.940 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:00.941 23:40:31 
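The scan traced above is setup/common.sh's get_meminfo helper: it reads /proc/meminfo line by line with IFS=': ', skipping every key until the requested one (Hugepagesize here) matches, then echoes its value column. A minimal standalone sketch of the same pattern, with an illustrative function body rather than the exact SPDK source:

    # Return the value column for one /proc/meminfo key, e.g. "Hugepagesize".
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # var is the key (colon stripped by IFS), val the number, _ the unit
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1   # key not found
    }

    get_meminfo Hugepagesize   # prints 2048 (kB) on the box traced above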
00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:00.941 23:40:31 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:00.941 23:40:31 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:00.941 23:40:31 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:00.941 23:40:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:00.941 ************************************
00:03:00.941 START TEST default_setup
00:03:00.941 ************************************
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
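The get_test_nr_hugepages trace above reduces to simple arithmetic: a 2097152 kB (2 GiB) request divided by the 2048 kB default hugepage size yields nr_hugepages=1024, all of which is assigned to the single requested node 0. A sketch of that computation (variable names illustrative, mirroring but not copying setup/hugepages.sh):

    size_kb=2097152            # requested hugepage memory, in kB (2 GiB)
    default_hugepages=2048     # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size_kb / default_hugepages ))   # -> 1024 pages

    declare -a nodes_test=()
    nodes_test[0]=$nr_hugepages   # the single user node '0' gets the full count
    echo "node0: ${nodes_test[0]} pages x ${default_hugepages} kB"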
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:00.941 23:40:31 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:02.316 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:02.316 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:03:02.316 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:03:03.254 0000:88:00.0 (8086 0a54): nvme -> vfio-pci
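The driver hand-offs above are performed by spdk/scripts/setup.sh. A hedged sketch of the generic sysfs mechanism behind a rebind like "0000:88:00.0 (8086 0a54): nvme -> vfio-pci" (this is the standard kernel driver_override interface, not SPDK's actual implementation):

    # Generic PCI rebind via sysfs (kernel >= 3.16); requires root and
    # the vfio-pci module loaded. bdf is the address printed in the log.
    rebind_to_vfio() {
        local bdf=$1
        # Detach from the current driver, if any (e.g. nvme, ioatdma).
        if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
            echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
        fi
        # Steer the next probe at vfio-pci, then trigger the probe.
        echo vfio-pci > "/sys/bus/pci/devices/$bdf/driver_override"
        echo "$bdf" > /sys/bus/pci/drivers_probe
    }

    rebind_to_vfio 0000:88:00.0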
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:03.254 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:03.255 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45952620 kB' 'MemAvailable: 49431164 kB' 'Buffers: 2704 kB' 'Cached: 10166144 kB' 'SwapCached: 0 kB' 'Active: 7173616 kB' 'Inactive: 3493852 kB' 'Active(anon): 6784712 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501892 kB' 'Mapped: 209848 kB' 'Shmem: 6286092 kB' 'KReclaimable: 179256 kB' 'Slab: 550032 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370776 kB' 'KernelStack: 12752 kB' 'PageTables: 8000 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7902324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:03.255 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.255 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... xtrace condensed: the same read / compare / continue cycle repeats for every key, MemFree through HardwareCorrupted, until the AnonHugePages line matches ...]
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
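Each traced lookup walks the whole meminfo snapshot to extract a single field. With tracing off, an equivalent one-field lookup (illustrative, not taken from the SPDK tree) is a one-liner:

    # Same effect as the scan above: pull one value out of /proc/meminfo.
    anon_kb=$(awk '$1 == "AnonHugePages:" {print $2}' /proc/meminfo)
    echo "AnonHugePages: ${anon_kb} kB"   # 0 kB in the run above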
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951728 kB' 'MemAvailable: 49430272 kB' 'Buffers: 2704 kB' 'Cached: 10166144 kB' 'SwapCached: 0 kB' 'Active: 7175908 kB' 'Inactive: 3493852 kB' 'Active(anon): 6787004 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504232 kB' 'Mapped: 209484 kB' 'Shmem: 6286092 kB' 'KReclaimable: 179256 kB' 'Slab: 550108 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370852 kB' 'KernelStack: 12704 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7904336 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196472 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.256 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
[... xtrace condensed: the same read / compare / continue cycle repeats for every key, MemFree through HugePages_Rsvd, until the HugePages_Surp line matches ...]
00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
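verify_nr_hugepages is gathering the global HugePages_* counters here; the per-node view lives in the same sysfs files clear_hp wrote zeros into earlier. A small illustrative cross-check of the two views (the comparison itself is an assumption for illustration, not the script's exact logic):

    # Sum per-node 2048 kB hugepage counts and compare with the global knob.
    total=0
    for f in /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages; do
        read -r n < "$f"
        (( total += n ))
    done
    global=$(< /proc/sys/vm/nr_hugepages)
    echo "per-node sum: $total  global nr_hugepages: $global"
    (( total == global )) || echo "per-node and global counts disagree" >&2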
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read 
-r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951036 kB' 'MemAvailable: 49429580 kB' 'Buffers: 2704 kB' 'Cached: 10166164 kB' 'SwapCached: 0 kB' 'Active: 7170496 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781592 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498824 kB' 'Mapped: 209032 kB' 'Shmem: 6286112 kB' 'KReclaimable: 179256 kB' 'Slab: 550040 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370784 kB' 'KernelStack: 12800 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.258 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:03.259 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:03.260 nr_hugepages=1024 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:03.260 resv_hugepages=0 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.260 surplus_hugepages=0 00:03:03.260 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.260 anon_hugepages=0 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45951208 kB' 'MemAvailable: 49429752 kB' 'Buffers: 2704 kB' 'Cached: 10166188 kB' 'SwapCached: 0 kB' 'Active: 7169980 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781076 kB' 'Inactive(anon): 0 kB' 
'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498232 kB' 'Mapped: 208956 kB' 'Shmem: 6286136 kB' 'KReclaimable: 179256 kB' 'Slab: 550048 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370792 kB' 'KernelStack: 12752 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.261 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 
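[editor's note: the repetitive xtrace condensed above all comes from one small helper, get_meminfo in setup/common.sh, which scans meminfo key/value pairs. The sketch below is reconstructed from the trace alone and is not the verbatim SPDK source; treat names and control flow as an approximation.]

shopt -s extglob   # needed for the +([0-9]) pattern used below

get_meminfo() {
    local get=$1
    local node=$2
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read that NUMA node's statistics instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node meminfo lines carry a "Node N " prefix - strip it
    mem=("${mem[@]#Node +([0-9]) }")
    local IFS=': '
    while read -r var val _; do
        # Skip every field until the requested key matches, then print its value
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total     # prints 1024 on this runner
get_meminfo HugePages_Surp 0    # prints 0 (no surplus pages on node 0)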
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:03.262 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:03.263 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20229008 kB' 'MemUsed: 12647932 kB' 'SwapCached: 0 kB' 'Active: 6049116 kB' 'Inactive: 3248472 kB' 'Active(anon): 5838380 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967372 kB' 'Mapped: 172968 kB' 'AnonPages: 333440 kB' 'Shmem: 5508164 kB' 'KernelStack: 7688 kB' 'PageTables: 5440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358264 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log condensed: the key-matching loop executes `continue` over node0's meminfo fields (MemTotal onward) while scanning for HugePages_Surp; the scan continues below]
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.264 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.521 23:40:33 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:03.521 node0=1024 expecting 1024 00:03:03.521 23:40:33 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:03.521 00:03:03.521 real 0m2.531s 00:03:03.521 user 0m0.696s 00:03:03.522 sys 0m0.903s 00:03:03.522 23:40:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:03.522 23:40:33 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:03.522 ************************************ 00:03:03.522 END TEST default_setup 00:03:03.522 ************************************ 00:03:03.522 23:40:33 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:03.522 23:40:33 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:03.522 23:40:33 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:03.522 23:40:33 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:03.522 ************************************ 00:03:03.522 START TEST per_node_1G_alloc 00:03:03.522 ************************************ 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.522 23:40:33 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:04.454 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.454 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:04.454 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.454 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.454 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.454 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.454 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:04.454 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.454 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:04.454 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:04.454 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:04.454 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:04.454 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:04.454 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:04.454 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:04.454 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:04.454 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:04.715 23:40:35 
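
The get_test_nr_hugepages_per_node trace above ends with nodes_test[0]=512 and nodes_test[1]=512 before scripts/setup.sh is invoked with NRHUGE=512 HUGENODE=0,1. A minimal sketch of that per-node bookkeeping, assuming the same variable names as the traced setup/hugepages.sh (illustrative only, not the SPDK function itself):

# Sketch: distribute a per-node hugepage count over the requested NUMA nodes,
# mirroring the nodes_test[] assignments seen in the trace.
declare -a nodes_test
get_test_nr_hugepages_per_node() {
    local user_nodes=("$@")     # node ids, e.g. 0 1
    local _nr_hugepages=512     # per-node count taken from the trace
    local _no_nodes
    nodes_test=()
    for _no_nodes in "${user_nodes[@]}"; do
        nodes_test[_no_nodes]=$_nr_hugepages   # index is the node id
    done
}
get_test_nr_hugepages_per_node 0 1
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=512 node1=512
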
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:04.715 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45947700 kB' 'MemAvailable: 49426244 kB' 'Buffers: 2704 kB' 'Cached: 10166264 kB' 'SwapCached: 0 kB' 'Active: 7170584 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781680 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498736 kB' 'Mapped: 209112 kB' 'Shmem: 6286212 kB' 'KReclaimable: 179256 kB' 'Slab: 550200 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370944 kB' 'KernelStack: 12784 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.716 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue [meminfo key scan elided: every key from MemTotal through HardwareCorrupted was tested against AnonHugePages and skipped with continue] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 
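
Each elided scan is the same loop: get_meminfo splits every meminfo line on IFS=': ' and echoes the value of the first key matching the requested name. A standalone sketch of that pattern, assuming the system-wide /proc/meminfo (the per-node path under /sys/devices/system/node additionally strips the leading "Node N " prefix, as the mapfile and "${mem[@]#Node +([0-9]) }" steps traced above show):

# Sketch: print the value of one /proc/meminfo key, following the
# IFS=': ' read loop pattern traced from setup/common.sh.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching keys
        echo "$val"                        # value only; the kB unit lands in _
        return 0
    done < /proc/meminfo
    return 1                               # key not found
}
get_meminfo HugePages_Surp   # e.g. prints 0, as in the trace
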
00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45948224 kB' 'MemAvailable: 49426768 kB' 'Buffers: 2704 kB' 'Cached: 10166264 kB' 'SwapCached: 0 kB' 'Active: 7170952 kB' 'Inactive: 3493852 kB' 'Active(anon): 6782048 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499148 kB' 'Mapped: 209044 kB' 'Shmem: 6286212 kB' 'KReclaimable: 179256 kB' 'Slab: 550160 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370904 kB' 'KernelStack: 12800 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:04.717 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.717 23:40:35 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ [meminfo key scan elided: keys Buffers through HugePages_Free tested against HugePages_Surp and skipped with continue] 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc 
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.719 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45948476 kB' 'MemAvailable: 49427020 kB' 'Buffers: 2704 kB' 'Cached: 10166268 kB' 'SwapCached: 0 kB' 'Active: 7170056 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781152 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498208 kB' 'Mapped: 208968 kB' 'Shmem: 6286216 kB' 'KReclaimable: 179256 kB' 'Slab: 550196 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370940 kB' 'KernelStack: 12832 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: every field from MemTotal through HugePages_Free is compared against HugePages_Rsvd and continues without matching]
00:03:04.721 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:04.721 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.721 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:04.721 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:04.721 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:04.721 nr_hugepages=1024
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:04.722 resv_hugepages=0
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:04.722 surplus_hugepages=0
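With surp and resv read back, hugepages.sh echoes the test parameters above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) and then asserts that the kernel's accounting agrees: HugePages_Total must equal nr_hugepages + surp + resv. The snapshot confirms it, showing 'HugePages_Total: 1024' and 'Hugetlb: 2097152 kB' (1024 pages x 2048 kB Hugepagesize). A hedged sketch of that check, reusing the get_meminfo sketch above; variable names mirror the trace, but the surrounding logic in setup/hugepages.sh is more involved:

    nr_hugepages=1024                    # page count configured for this test
    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    total=$(get_meminfo HugePages_Total) # 1024 in this run
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total == $nr_hugepages + $surp + $resv"
    else
        echo "hugepage accounting mismatch: got $total" >&2
        exit 1
    fi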
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:04.722 anon_hugepages=0
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.722 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45950004 kB' 'MemAvailable: 49428548 kB' 'Buffers: 2704 kB' 'Cached: 10166324 kB' 'SwapCached: 0 kB' 'Active: 7170380 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781476 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498564 kB' 'Mapped: 208968 kB' 'Shmem: 6286272 kB' 'KReclaimable: 179256 kB' 'Slab: 550196 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370940 kB' 'KernelStack: 12832 kB' 'PageTables: 8148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: every field from MemTotal through Unaccepted is compared against HugePages_Total and continues without matching]
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
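get_nodes then enumerates the NUMA nodes under /sys/devices/system/node; with no_nodes=2, this per_node_1G_alloc case expects the 1024 global pages split 512/512 and re-reads each node's counters from that node's own meminfo file (whose lines carry the "Node N " prefix that get_meminfo strips). A sketch of that per-node pass, again reconstructed from the trace rather than copied from setup/hugepages.sh, which keeps separate nodes_sys/nodes_test arrays:

    nodes_sys=()
    # The real script uses the extglob pattern node+([0-9]); a plain glob is
    # equivalent for these directory names.
    for node in /sys/devices/system/node/node[0-9]*; do
        nodes_sys[${node##*node}]=512 # expected split of 1024 pages across 2 nodes
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes" # 2 on this build machine
    for node in "${!nodes_sys[@]}"; do
        surp=$(get_meminfo HugePages_Surp "$node")
        total=$(get_meminfo HugePages_Total "$node")
        echo "node$node: HugePages_Total=$total HugePages_Surp=$surp (expected ${nodes_sys[node]})"
    done

The node0 snapshot below shows exactly that split: 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'.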
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:04.983 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21289464 kB' 'MemUsed: 11587476 kB' 'SwapCached: 0 kB' 'Active: 6048864 kB' 'Inactive: 3248472 kB' 'Active(anon): 5838128 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967424 kB' 'Mapped: 172980 kB' 'AnonPages: 333104 kB' 'Shmem: 5508216 kB' 'KernelStack: 7672 kB' 'PageTables: 5388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358212 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242324 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: node0 fields MemTotal through HugePages_Total are compared against HugePages_Surp and continue without matching]
00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24660540 kB' 'MemUsed: 3004248 kB' 'SwapCached: 0 kB' 'Active: 1121504 kB' 'Inactive: 245380 kB' 'Active(anon): 943336 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1201628 kB' 'Mapped: 35988 kB' 'AnonPages: 165368 kB' 'Shmem: 778080 kB' 'KernelStack: 5144 kB' 'PageTables: 2712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 191984 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 128616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- 
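The @17-@33 entries above are one call of setup/common.sh's get_meminfo helper: pick /proc/meminfo or the node's own meminfo file, strip the "Node N " prefix, then scan "field: value" pairs until the requested field matches and echo its value. A minimal runnable sketch of that loop, reconstructed from the traced commands (illustrative; not the exact SPDK source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) prefix-strip pattern below

    # get_meminfo <field> [node] -- print the value of <field> from /proc/meminfo,
    # or from the per-node meminfo file when a node number is given.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node N " prefix
        local IFS=': '
        for line in "${mem[@]}"; do
            read -r var val _ <<< "$line"   # e.g. var=HugePages_Surp val=0
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

Called as get_meminfo HugePages_Surp 1, this prints 0, matching the 'HugePages_Surp: 0' field in the node1 snapshot above.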
00:03:04.985 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # (field scan: each node1 snapshot field, MemTotal through HugePages_Free, is read and skipped until HugePages_Surp matches)
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:04.986 node0=512 expecting 512
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:04.986 node1=512 expecting 512
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:04.986
00:03:04.986 real 0m1.457s
00:03:04.986 user 0m0.626s
00:03:04.986 sys 0m0.792s
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:04.986 23:40:35 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:04.986 ************************************
00:03:04.986 END TEST per_node_1G_alloc
00:03:04.986 ************************************
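The @126-@128 bookkeeping above relies on a compact bash idiom: indexing an associative array by value collapses duplicates, so if every node ended up with the same page count, sorted_t holds exactly one key. A small sketch of the idiom with this run's values (sorted_s is filled the same way from nodes_sys; the surrounding hugepages.sh logic is simplified away):

    declare -A sorted_t=()
    nodes_test=(512 512)   # per-node hugepage counts accumulated above
    for node in "${!nodes_test[@]}"; do
        sorted_t[${nodes_test[node]}]=1   # array keys collapse duplicate counts
        echo "node$node=${nodes_test[node]} expecting 512"
    done
    # One distinct key means every node landed on the same number of pages.
    (( ${#sorted_t[@]} == 1 )) && echo "nodes agree: ${!sorted_t[*]} pages each"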
00:03:04.986 23:40:35 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:03:04.986 23:40:35 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:04.986 23:40:35 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:04.986 23:40:35 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:04.986 ************************************
00:03:04.986 START TEST even_2G_alloc
00:03:04.986 ************************************
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
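The derivation traced above is plain arithmetic: the requested 2097152 kB (2 GiB) divided by the 2048 kB default hugepage size gives nr_hugepages=1024, and HUGE_EVEN_ALLOC splits that evenly over _no_nodes=2 as 512 pages per node. The same numbers, checked standalone (illustrative; not the script's code):

    size_kb=2097152            # requested total, i.e. 2 GiB
    default_hugepage_kb=2048   # Hugepagesize reported in /proc/meminfo
    no_nodes=2
    nr_hugepages=$(( size_kb / default_hugepage_kb ))   # 1024
    echo "nr_hugepages=$nr_hugepages, per node: $(( nr_hugepages / no_nodes ))"
    # -> nr_hugepages=1024, per node: 512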
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:04.986 23:40:35 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:05.921 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:05.921 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:05.921 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:05.921 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:05.921 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:05.921 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:05.921 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:05.921 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:05.921 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:05.921 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:05.921 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:05.921 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:05.921 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:05.921 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:06.183 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:06.183 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:06.183 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.183 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.184 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45944268 kB' 'MemAvailable: 49422812 kB' 'Buffers: 2704 kB' 'Cached: 10166392 kB' 'SwapCached: 0 kB' 'Active: 7170812 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781908 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498760 kB' 'Mapped: 209048 kB' 'Shmem: 6286340 kB' 'KReclaimable: 179256 kB' 'Slab: 550036 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370780 kB' 'KernelStack: 12816 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
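The @96 test above is a transparent-hugepage check: the glob *\[\n\e\v\e\r\]* matches only when '[never]' is the selected THP mode, so the AnonHugePages counter is sampled whenever THP is not fully disabled (here the mode string is "always [madvise] never"). A sketch of that branch, assuming the standard sysfs control file and the get_meminfo sketch given earlier:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP may be handing out anonymous hugepages, so sample the counter.
        anon=$(get_meminfo AnonHugePages)
    else
        anon=0
    fi
    echo "anon=$anon"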
00:03:06.184 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 -- # (field scan: each snapshot field, MemTotal through HardwareCorrupted, is read and skipped until AnonHugePages matches)
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
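The snapshot above already contains everything needed to sanity-check the allocation by hand: HugePages_Total: 1024, HugePages_Free: 1024, HugePages_Surp: 0, and Hugetlb: 2097152 kB, which is exactly 1024 x 2048 kB (Hugepagesize). A one-liner that pulls the same fields on a live box (illustrative; not part of the test):

    awk -F': *' '/^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|Hugetlb)/ { print $1 "=" $2 }' /proc/meminfo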
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.185 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45944544 kB' 'MemAvailable: 49423088 kB' 'Buffers: 2704 kB' 'Cached: 10166396 kB' 'SwapCached: 0 kB' 'Active: 7170328 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781424 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498280 kB' 'Mapped: 208928 kB' 'Shmem: 6286344 kB' 'KReclaimable: 179256 kB' 'Slab: 549916 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370660 kB' 'KernelStack: 12832 kB' 'PageTables: 8052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898880 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31/@32 -- # (field scan against HugePages_Surp in progress: MemTotal through PageTables shown before the trace is cut off here)
00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.186 23:40:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.186 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- 
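The lookup traced above is setup/common.sh's get_meminfo: read a meminfo file, strip any "Node N " prefix, then scan "key: value" pairs until the requested key matches and print its value. A minimal standalone sketch reconstructed from this trace follows; only get_meminfo, mem_f, and the prefix strip are confirmed by the log, the loop glue and the here-string are assumptions:

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace
  get_meminfo() {
      local get=$1 node=$2
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # With $node empty this path does not exist, so the global
      # /proc/meminfo is kept; with node=0 the per-node file is used.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix of per-node files
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # the long key scan condensed above
          echo "$val"
          return 0
      done
  }
  # e.g. surp=$(get_meminfo HugePages_Surp) prints 0 on this machine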
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:06.187 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45945248 kB' 'MemAvailable: 49423792 kB' 'Buffers: 2704 kB' 'Cached: 10166412 kB' 'SwapCached: 0 kB' 'Active: 7170708 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781804 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498596 kB' 'Mapped: 209016 kB' 'Shmem: 6286360 kB' 'KReclaimable: 179256 kB' 'Slab: 550056 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370800 kB' 'KernelStack: 12832 kB' 'PageTables: 8104 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:06.188 23:40:36 [set -x scan condensed: setup/common.sh@32 compares each meminfo key above against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and continues until HugePages_Rsvd matches]
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:06.189 nr_hugepages=1024
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:06.189 resv_hugepages=0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:06.189 surplus_hugepages=0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:06.189 anon_hugepages=0
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
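The checks at hugepages.sh@107 and @110 are plain pool accounting: the kernel-reported HugePages_Total must equal the pages the test requested plus surplus plus reserved. A hedged sketch with the values echoed above (the variable names mirror setup/hugepages.sh; the standalone script itself is an assumed reconstruction):

  #!/usr/bin/env bash
  # Values reported by the get_meminfo calls in this run:
  nr_hugepages=1024   # pool size requested by even_2G_alloc
  surp=0              # HugePages_Surp
  resv=0              # HugePages_Rsvd
  total=1024          # HugePages_Total
  # The test proceeds only if the whole pool is accounted for:
  (( total == nr_hugepages + surp + resv )) && echo "pool consistent"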
kB' 'MemAvailable: 49423540 kB' 'Buffers: 2704 kB' 'Cached: 10166436 kB' 'SwapCached: 0 kB' 'Active: 7170712 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781808 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498628 kB' 'Mapped: 209016 kB' 'Shmem: 6286384 kB' 'KReclaimable: 179256 kB' 'Slab: 550056 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370800 kB' 'KernelStack: 12848 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7898924 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.189 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:06.190 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (loop: read -r var val _; skip KReclaimable .. Unaccepted in /proc/meminfo; match at HugePages_Total)
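The condensed read/skip loops in this trace are all the same pattern: setup/common.sh's get_meminfo walks a meminfo-style file with IFS=': ', skipping every key until the requested one matches, then echoes its value. A minimal standalone sketch of that pattern, assuming only what the trace itself shows (the function name meminfo_lookup is illustrative; the helper traced here is get_meminfo):

    #!/usr/bin/env bash
    # Key lookup over /proc/meminfo or a NUMA node's meminfo, mirroring the
    # IFS=': ' / read -r var val _ / continue loop in the trace.
    shopt -s extglob                         # for the +([0-9]) prefix strip below
    meminfo_lookup() {                       # illustrative name, not the script's own
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local line var val _
        # Per-node lookups read that node's own meminfo when it exists.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")     # node files prefix each line with "Node <n> "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }
    meminfo_lookup HugePages_Total           # 1024 on this host at this point in the log
    meminfo_lookup HugePages_Surp 0          # node0 surplus; 0 in the trace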
00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21292328 kB' 'MemUsed: 11584612 kB' 'SwapCached: 0 kB' 'Active: 6049336 kB' 'Inactive: 3248472 kB' 'Active(anon): 5838600 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967492 kB' 'Mapped: 173028 kB' 'AnonPages: 333464 kB' 'Shmem: 5508284 kB' 'KernelStack: 7688 kB' 'PageTables: 5392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358156 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:06.452 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (loop: read -r var val _; skip MemTotal .. HugePages_Free in node0 meminfo; match at HugePages_Surp)
00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:06.453 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24658788 kB' 'MemUsed: 3006000 kB' 'SwapCached: 0 kB' 'Active: 1121376 kB' 'Inactive: 245380 kB' 'Active(anon): 943208 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1201668 kB' 'Mapped: 35988 kB' 'AnonPages: 165148 kB' 'Shmem: 778120 kB' 'KernelStack: 5160 kB' 'PageTables: 2760 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 191900 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 128532 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512'
'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:06.454 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # (loop: read -r var val _; skip MemTotal .. HugePages_Free in node1 meminfo; match at HugePages_Surp)
00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:06.455 node0=512 expecting 512 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:06.455 node1=512 expecting 512 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:06.455 00:03:06.455 real 0m1.433s 00:03:06.455 user 0m0.581s 00:03:06.455 sys 0m0.812s 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:06.455 23:40:36 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:06.455 ************************************ 00:03:06.455 END TEST even_2G_alloc 00:03:06.455 ************************************ 00:03:06.455 23:40:36 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:06.455 23:40:36 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:06.455 23:40:36 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:06.455 23:40:36 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:06.455 ************************************ 00:03:06.455 START TEST odd_alloc 00:03:06.455 ************************************ 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
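odd_alloc asks for 1025 hugepages on a 2-node box, so the per-node split can no longer be even: the trace that follows shows get_test_nr_hugepages_per_node assigning node1=512 and node0=513. A minimal sketch that reproduces those numbers, assuming the remainder is folded into the shares as the loop walks down from the highest node (split_hugepages is an illustrative name, not the script's own):

    # Split nr_hugepages across NUMA nodes the way the odd_alloc trace does:
    # walk nodes from the top; each node gets what's left divided by the nodes
    # remaining, so the odd page lands on node0 (node1=512, node0=513 for 1025/2).
    split_hugepages() {                     # illustrative name
        local left=$1 n=$2
        local -a nodes_test
        local i
        while (( n > 0 )); do
            nodes_test[n - 1]=$(( left / n ))   # this node's share
            (( left -= nodes_test[n - 1] ))     # remainder carried to lower nodes
            (( n-- ))
        done
        for i in "${!nodes_test[@]}"; do
            echo "node$i=${nodes_test[i]}"
        done
    }
    split_hugepages 1025 2    # -> node0=513, node1=512, matching the trace below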
00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.455 23:40:36 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
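scripts/setup.sh then has to turn those per-node targets into actual kernel reservations. The log doesn't show which knob it uses at this trace level; on Linux the standard per-node interface is the nodeN sysfs file below, so this is a sketch under that assumption (requires root; the paths are stock kernel ones, not SPDK-specific):

    # Reserve 2 MiB hugepages per NUMA node via the kernel's sysfs knobs.
    # Whether setup.sh uses these per-node files or the global
    # /proc/sys/vm/nr_hugepages is not visible in this trace.
    declare -A want=( [0]=513 [1]=512 )   # targets from the odd_alloc trace above
    for node in "${!want[@]}"; do
        echo "${want[$node]}" \
            > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # Confirm what the kernel actually granted on each node.
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages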
00:03:07.836 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.836 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:07.836 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.836 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.836 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.836 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.836 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.836 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.836 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.836 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:07.836 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:07.836 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:07.836 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:07.836 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:07.836 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:07.836 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:07.836 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:07.836 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:07.836 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:07.836 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:07.836 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45936388 kB' 'MemAvailable: 49414932 kB' 'Buffers: 2704 kB' 'Cached: 10166532 kB' 'SwapCached: 0 kB' 'Active: 7168812 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779908 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496584 kB' 'Mapped: 208304 kB' 'Shmem: 6286480 kB' 'KReclaimable: 179256 kB' 'Slab: 549988 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370732 kB' 'KernelStack: 12928 kB' 'PageTables: 9044 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7885304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2154076 kB'
'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:07.837 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # (loop: read -r var val _; skip MemTotal .. HardwareCorrupted in /proc/meminfo; match at AnonHugePages)
00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45938848 kB' 'MemAvailable: 49417392 kB' 'Buffers: 2704 kB' 'Cached: 10166532 kB' 'SwapCached: 0 kB' 'Active: 7168268 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779364 kB' 'Inactive(anon): 0
kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495980 kB' 'Mapped: 208212 kB' 'Shmem: 6286480 kB' 'KReclaimable: 179256 kB' 'Slab: 550020 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370764 kB' 'KernelStack: 12784 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7885324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:07.838 
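The frames just traced are setup/common.sh's get_meminfo helper doing a linear scan of /proc/meminfo: mapfile loads the file into an array, the "Node N " prefix is stripped so per-NUMA-node meminfo files parse the same way, and each line is split with IFS=': ' until the requested key matches, at which point only the value is echoed. A minimal sketch reconstructed from these xtrace frames (the real setup/common.sh body may differ in detail: the trace feeds the loop via printf '%s\n' "${mem[@]}" at common.sh@16, a plain for-loop is used here for brevity, and the node-selection branch at @23/@25 is paraphrased):

  get_meminfo() {
      local get=$1 node=$2          # common.sh@17/@18: key to look up, optional NUMA node
      local var val
      local mem_f mem
      mem_f=/proc/meminfo           # common.sh@22: system-wide default source
      # common.sh@23/@25: with node="" the probed path is /sys/devices/system/node/node/meminfo,
      # which does not exist, so the default is kept; with node=0 it would switch sources
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"                # common.sh@28: one array element per line
      shopt -s extglob
      mem=("${mem[@]#Node +([0-9]) }")         # common.sh@29: drop the "Node N " prefix
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"   # common.sh@31: "Key: value kB" -> var, val
          [[ $var == "$get" ]] || continue         # common.sh@32: skip non-matching keys
          echo "$val"                              # common.sh@33: the number only, e.g. 0
          return 0
      done
  }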
00:03:07.838 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] -> no match, continue; repeated for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free HugePages_Rsvd
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939548 kB' 'MemAvailable: 49418092 kB' 'Buffers: 2704 kB' 'Cached: 10166540 kB' 'SwapCached: 0 kB' 'Active: 7168000 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779096 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495768 kB' 'Mapped: 208196 kB' 'Shmem: 6286488 kB' 'KReclaimable: 179256 kB' 'Slab: 550020 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370764 kB' 'KernelStack: 12768 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7886368 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
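Judging by the setup/hugepages.sh@97/@99/@100 frames interleaved above, the caller collects each counter through command substitution, roughly like this (a sketch; the variable names are the ones the trace prints, the surrounding hugepages.sh code is not shown in this log):

  anon=$(get_meminfo AnonHugePages)    # hugepages.sh@97  -> anon=0 above
  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> surp=0 above
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> scanned next, below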
00:03:07.840 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] -> no match, continue; repeated for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable Bounce WritebackTmp CommitLimit Committed_AS VmallocTotal VmallocUsed VmallocChunk Percpu HardwareCorrupted AnonHugePages ShmemHugePages ShmemPmdMapped FileHugePages FilePmdMapped CmaTotal CmaFree Unaccepted HugePages_Total HugePages_Free
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:03:07.842 nr_hugepages=1025
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:07.842 resv_hugepages=0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:07.842 surplus_hugepages=0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:07.842 anon_hugepages=0
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
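All three counters came back 0, and the odd_alloc test now checks that the odd page count it configured (1025, hence the test name) is fully accounted for before re-reading HugePages_Total at hugepages.sh@110. A sketch of that bookkeeping with the values from this run (the literal 1025 in the @107/@109 frames is the already-expanded target count):

  nr=1025                   # target hugepage count, odd on purpose
  nr_hugepages=1025         # echoed by hugepages.sh@102 above
  anon=0 surp=0 resv=0      # AnonHugePages, HugePages_Surp, HugePages_Rsvd
  (( nr == nr_hugepages + surp + resv ))   # hugepages.sh@107: 1025 == 1025 + 0 + 0
  (( nr == nr_hugepages ))                 # hugepages.sh@109: no surplus or reserved pages hiding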
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45936344 kB' 'MemAvailable: 49414888 kB' 'Buffers: 2704 kB' 'Cached: 10166588 kB' 'SwapCached: 0 kB' 'Active: 7170620 kB' 'Inactive: 3493852 kB' 'Active(anon): 6781716 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498420 kB' 'Mapped: 208960 kB' 'Shmem: 6286536 kB' 'KReclaimable: 179256 kB' 'Slab: 550020 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370764 kB' 'KernelStack: 12704 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609868 kB' 'Committed_AS: 7889360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:07.842 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ $var == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] -> no match, continue; repeated for: MemTotal MemFree MemAvailable Buffers Cached SwapCached Active Inactive Active(anon) Inactive(anon) Active(file) Inactive(file) Unevictable Mlocked SwapTotal SwapFree Zswap Zswapped Dirty Writeback AnonPages Mapped Shmem KReclaimable Slab SReclaimable SUnreclaim KernelStack PageTables SecPageTables NFS_Unstable
00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.843 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- 
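The block above is setup/common.sh's get_meminfo walking a meminfo dump key by key until the requested field (HugePages_Total here) matches, then echoing its value. A minimal standalone sketch of that parsing pattern; the helper name meminfo_get and its argument handling are illustrative, not SPDK's actual function:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

meminfo_get() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    local var val _ line mem
    # Per-node figures live under sysfs when a node index is given.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 0 " prefix, if any
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"   # "MemTotal: 123 kB" -> var=MemTotal val=123
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

meminfo_get HugePages_Total      # system-wide, e.g. prints 1025 on this host
meminfo_get HugePages_Surp 0     # node0, as queried later in this trace

The same parser serves both the system-wide query above and the per-node queries that follow, because the per-node files differ only by the "Node <n> " prefix that the extglob strip removes.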
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21289212 kB' 'MemUsed: 11587728 kB' 'SwapCached: 0 kB' 'Active: 6047344 kB' 'Inactive: 3248472 kB' 'Active(anon): 5836608 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967632 kB' 'Mapped: 172968 kB' 'AnonPages: 331288 kB' 'Shmem: 5508424 kB' 'KernelStack: 7576 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358140 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:07.844 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
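After the system-wide check that HugePages_Total (1025) equals the requested count plus surplus and reserved pages, the trace above runs get_nodes (hugepages.sh@29-32), which records one entry per NUMA node: 512 pages on node0 and 513 on node1 in this run. A sketch of that enumeration, assuming the counts come from the standard sysfs leaf for 2048 kB pages (the exact source used by hugepages.sh is not visible in this trace):

shopt -s nullglob
nodes_sys=()
for node in /sys/devices/system/node/node[0-9]*; do
    # key each count by the numeric node index: .../node1 -> 1
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
no_nodes=${#nodes_sys[@]}
(( no_nodes > 0 )) || echo "no NUMA nodes with huge pages found" >&2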
[xtrace condensed: setup/common.sh@32 tests each node0 meminfo key against HugePages_Surp and continues past every non-matching key, MemTotal through HugePages_Free]
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:07.845 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:07.846 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:07.846 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 24647208 kB' 'MemUsed: 3017580 kB' 'SwapCached: 0 kB' 'Active: 1119908 kB' 'Inactive: 245380 kB' 'Active(anon): 941740 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1201664 kB' 'Mapped: 36080 kB' 'AnonPages: 163692 kB' 'Shmem: 778116 kB' 'KernelStack: 5112 kB' 'PageTables: 2644 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 191880 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 128512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
00:03:07.846 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: the same key-by-key scan runs over the node1 figures until HugePages_Surp matches]
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
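With both nodes' surpluses folded in (zero on each node here, on top of the reserved-page adjustment at hugepages.sh@116), the test compares the per-node split it observed against the one it expected. The sorted_t/sorted_s assignments that follow make that comparison order-insensitive: each count is stored as an array index, and bash lists indexed-array keys in ascending order, so a 512/513 expectation matches a 513/512 layout. A small illustration with this run's values (which side carries which ordering is illustrative):

nodes_test=(512 513)   # per-node counts on one side of the comparison
nodes_sys=(513 512)    # the other side: same totals on swapped nodes
sorted_t=() sorted_s=()
for node in "${!nodes_test[@]}"; do
    sorted_t[nodes_test[node]]=1   # indexes 512 and 513 both become set
    sorted_s[nodes_sys[node]]=1
done
# Both key lists render as "512 513", so the layouts compare equal:
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "per-node layout matches"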
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
node0=512 expecting 513
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
node1=513 expecting 512
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:03:07.847
00:03:07.847 real	0m1.484s
00:03:07.847 user	0m0.650s
00:03:07.847 sys	0m0.794s
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:07.847 23:40:38 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:07.847 ************************************
00:03:07.847 END TEST odd_alloc
00:03:07.847 ************************************
00:03:07.847 23:40:38 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:03:07.847 23:40:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:07.847 23:40:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:07.847 23:40:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:07.847 ************************************
00:03:07.847 START TEST custom_alloc
00:03:07.847 ************************************
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
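get_test_nr_hugepages, traced just above, converts a requested size into a page count: 1048576 becomes 512 pages and, a few lines below, 2097152 becomes 1024. That arithmetic is consistent with a size expressed in kB divided by the 2048 kB default huge page size; a sketch under that unit assumption:

default_hugepages=2048   # kB, matching 'Hugepagesize: 2048 kB' in the dumps above
get_test_nr_hugepages() {
    local size=$1   # requested size, assumed to be in kB
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$(( size / default_hugepages ))
}
get_test_nr_hugepages 1048576 && echo "$nr_hugepages"   # 512
get_test_nr_hugepages 2097152 && echo "$nr_hugepages"   # 1024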
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:07.847 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:07.848 23:40:38 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:09.226 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:09.226 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:09.226 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:09.226 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:09.226 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:09.226 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:09.226 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:09.226 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:09.226 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:09.226 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:09.226 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:09.226 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:09.226 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:09.226 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:09.226 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:09.226 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:09.226 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
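The HUGENODE string handed to scripts/setup.sh above is assembled from one nodes_hp[<node>]=<pages> entry per node, joined with commas via the local IFS=, set at hugepages.sh@167. A self-contained sketch of that assembly (function name illustrative):

build_hugenode() {
    local IFS=,   # makes "${arr[*]}" join its elements with commas
    local node
    local -a HUGENODE=()
    local -a nodes_hp=([0]=512 [1]=1024)   # per-node targets from this run
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    printf '%s\n' "${HUGENODE[*]}"   # -> nodes_hp[0]=512,nodes_hp[1]=1024
}
build_hugenode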
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.226 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.227 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44883032 kB' 'MemAvailable: 48361576 kB' 'Buffers: 2704 kB' 'Cached: 10166660 kB' 'SwapCached: 0 kB' 'Active: 7167748 kB' 'Inactive: 3493852 kB' 'Active(anon): 6778844 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495476 kB' 'Mapped: 208240 kB' 'Shmem: 6286608 kB' 'KReclaimable: 179256 kB' 'Slab: 550024 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370768 kB' 'KernelStack: 12784 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7885432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[... identical setup/common.sh@32 trace entries elided: each remaining /proc/meminfo key from MemTotal through HardwareCorrupted is tested against \A\n\o\n\H\u\g\e\P\a\g\e\s and skipped with "continue" ...]
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
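Each get_meminfo call is what produces the long [[ <key> == ... ]] / continue runs: common.sh snapshots /proc/meminfo and then walks it field by field until the requested key matches, echoing its value (0 for AnonHugePages here). A compact sketch of that lookup under the same IFS=': ' splitting, reconstructed from the trace; the real setup/common.sh differs in detail (it uses mapfile, and can also read /sys/devices/system/node/node*/meminfo after stripping the "Node N " prefix with extglob, per common.sh@29 above):

#!/usr/bin/env bash
# Sketch of the get_meminfo lookup traced above: scan /proc/meminfo for
# one key and print its value. Not the SPDK helper verbatim.
get_meminfo() {
    local get=$1
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the long "continue" runs in the log
        echo "${val:-0}"
        return 0
    done < /proc/meminfo
    echo 0   # key absent
}
get_meminfo AnonHugePages   # prints 0 on this box, matching anon=0 above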
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.228 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44884608 kB' 'MemAvailable: 48363152 kB' 'Buffers: 2704 kB' 'Cached: 10166660 kB' 'SwapCached: 0 kB' 'Active: 7168116 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779212 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495836 kB' 'Mapped: 208208 kB' 'Shmem: 6286608 kB' 'KReclaimable: 179256 kB' 'Slab: 550020 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370764 kB' 'KernelStack: 12784 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7885452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[... identical setup/common.sh@32 trace entries elided: each /proc/meminfo key from MemTotal through HugePages_Rsvd is tested against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped with "continue" ...]
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
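With anon=0 and surp=0 established, and HugePages_Rsvd about to be read the same way below, verify_nr_hugepages has what it needs to confirm the request was honored: every snapshot above reports HugePages_Total: 1536, which is exactly nodes_hp[0] + nodes_hp[1] = 512 + 1024. A worked form of that final check, as a hypothetical rendering; the real comparison lives in setup/hugepages.sh and may differ:

#!/usr/bin/env bash
# Hypothetical rendering of the verification step: the global hugepage
# count should match the summed per-node requests, net of surplus pages.
expected=1536   # 512 (node 0) + 1024 (node 1), per HUGENODE above
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
if (( total - surp == expected )); then
    echo "hugepages OK: total=$total surp=$surp"
else
    echo "hugepages mismatch: want $expected, have $((total - surp))" >&2
    exit 1
fi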
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.230 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44884612 kB' 'MemAvailable: 48363156 kB' 'Buffers: 2704 kB' 'Cached: 10166680 kB' 'SwapCached: 0 kB' 'Active: 7168180 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779276 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495864 kB' 'Mapped: 208208 kB' 'Shmem: 6286628 kB' 'KReclaimable: 179256 kB' 'Slab: 550112 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370856 kB' 'KernelStack: 12800 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7885472 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[... identical setup/common.sh@32 trace entries elided: each /proc/meminfo key from MemTotal through ShmemHugePages is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped with "continue"; the capture breaks off mid-scan ...]
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc 
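
What this trace is doing: get_meminfo (setup/common.sh) reads a meminfo snapshot, splits each "Key: value" line on ': ', and echoes the value once the requested key matches, which is why every non-matching key produces the same continue / read pair above. A minimal standalone sketch of that loop, assuming plain bash 4+ (names mirror the trace; this is a simplification, not the exact SPDK helper):

#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: scan "Key: value"
# pairs and print the value of the requested key.
shopt -s extglob
get_meminfo_sketch() {
  local get=$1 node=${2:-} var val _ line
  local mem_f=/proc/meminfo mem
  # Per-node counters live in sysfs when a node index is passed; with an
  # empty node the path does not exist and /proc/meminfo is kept.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines carry a "Node N " prefix
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

get_meminfo_sketch HugePages_Rsvd   # printed 0 on the system traced here
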
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:03:09.232 nr_hugepages=1536
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:09.232 resv_hugepages=0
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:09.232 surplus_hugepages=0
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:09.232 anon_hugepages=0
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.232 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 44884612 kB' 'MemAvailable: 48363156 kB' 'Buffers: 2704 kB' 'Cached: 10166680 kB' 'SwapCached: 0 kB' 'Active: 7167572 kB' 'Inactive: 3493852 kB' 'Active(anon): 6778668 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495256 kB' 'Mapped: 208208 kB' 'Shmem: 6286628 kB' 'KReclaimable: 179256 kB' 'Slab: 550112 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370856 kB' 'KernelStack: 12784 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086604 kB' 'Committed_AS: 7885492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
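
The checks at hugepages.sh@107/@109 are plain arithmetic over the values just read: the 1536 pages the test requested must equal nr_hugepages plus surplus plus reserved. With the snapshot above (HugePages_Total: 1536, HugePages_Rsvd: 0, HugePages_Surp: 0) both assertions hold; standalone:

# The same arithmetic, with the values read from the snapshot above:
nr_hugepages=1536 surp=0 resv=0
(( 1536 == nr_hugepages + surp + resv )) && echo "requested pages fully accounted for"
(( 1536 == nr_hugepages )) && echo "no surplus or reserved pages in play"
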
[... identical continue / IFS=': ' / read -r var val _ trace repeats while get_meminfo scans the global snapshot above (MemTotal through Unaccepted) until HugePages_Total matches ...]
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
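
get_nodes enumerates the NUMA nodes under /sys/devices/system/node and records a per-node hugepage target; this run expects a 512/1024 split across two nodes. A self-contained sketch of the enumeration (nodes_sys and the node+([0-9]) glob mirror the trace; the targets are this run's values, not defaults):

#!/usr/bin/env bash
# Sketch of the get_nodes step traced above.
shopt -s extglob nullglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
  nodes_sys[${node##*node}]=0   # node index -> hugepage target, filled in by the test
done
no_nodes=${#nodes_sys[@]}       # 2 on the machine traced above (targets 512 and 1024)
(( no_nodes > 0 )) || { echo "no NUMA nodes found" >&2; exit 1; }
echo "nodes: ${!nodes_sys[*]}"
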
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.234 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21287780 kB' 'MemUsed: 11589160 kB' 'SwapCached: 0 kB' 'Active: 6047832 kB' 'Inactive: 3248472 kB' 'Active(anon): 5837096 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967724 kB' 'Mapped: 172280 kB' 'AnonPages: 331704 kB' 'Shmem: 5508516 kB' 'KernelStack: 7640 kB' 'PageTables: 5172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358120 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
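
The node0 snapshot just printed reports HugePages_Total: 512, matching the nodes_sys[0]=512 target set above. A hypothetical spot-check one could run by hand (not part of the test), assuming 2 MiB hugepages, which is the Hugepagesize this run reports:

# Read node0's hugepage count straight from the hugetlb sysfs knob
# for the 2048 kB page size.
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
# -> 512 on this machine, matching the HugePages_Total line above
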
[... identical continue / IFS=': ' / read -r var val _ trace repeats while get_meminfo scans the node0 snapshot above (MemTotal through HugePages_Free) until HugePages_Surp matches ...]
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
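
The hugepages.sh@115-@117 loop folds reserved pages plus each node's surplus into nodes_test[] before the per-node totals are compared with the expected split; node1 is now read the same way node0 was. A self-contained sketch of that accounting, seeded with this run's values (nodes_test, resv and the surplus of 0 all come from the trace above):

#!/usr/bin/env bash
# Sketch of the per-node accounting loop traced above.
nodes_test=([0]=512 [1]=1024)   # this run's expected 512/1024 split
resv=0                          # HugePages_Rsvd read earlier
for node in "${!nodes_test[@]}"; do
  (( nodes_test[node] += resv ))   # reserved pages count against every node
  surp=0                           # get_meminfo HugePages_Surp <node> returned 0 here
  (( nodes_test[node] += surp ))
  echo "node$node: expecting ${nodes_test[node]} hugepages"
done
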
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:09.495 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:09.496 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664788 kB' 'MemFree: 23597524 kB' 'MemUsed: 4067264 kB' 'SwapCached: 0 kB' 'Active: 1119896 kB' 'Inactive: 245380 kB' 'Active(anon): 941728 kB' 'Inactive(anon): 0 kB' 'Active(file): 178168 kB' 'Inactive(file): 245380 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1201664 kB' 'Mapped: 35928 kB' 'AnonPages: 163712 kB' 'Shmem: 778116 kB' 'KernelStack: 5144 kB' 'PageTables: 2656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 63368 kB' 'Slab: 191992 kB' 'SReclaimable: 63368 kB' 'SUnreclaim: 128624 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... identical continue / IFS=': ' / read -r var val _ trace repeats while get_meminfo scans the node1 snapshot above (MemTotal through ShmemHugePages at the point this capture breaks) ...]
00:03:09.497 23:40:39
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:09.497 node0=512 expecting 512 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:09.497 node1=1024 expecting 1024 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:09.497 00:03:09.497 real 0m1.455s 00:03:09.497 user 0m0.653s 00:03:09.497 sys 0m0.747s 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:09.497 23:40:39 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:09.497 ************************************ 00:03:09.497 END TEST custom_alloc 00:03:09.497 ************************************ 00:03:09.497 23:40:39 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:09.497 23:40:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:09.497 23:40:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:09.497 23:40:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:09.497 ************************************ 00:03:09.497 START TEST no_shrink_alloc 00:03:09.497 ************************************ 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 
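The trace above shows get_test_nr_hugepages_per_node pinning the full 1024-page request to node 0 because an explicit node list ('0') was passed to get_test_nr_hugepages. A minimal standalone sketch of that distribution logic, assuming a 2048 kB default hugepage size; function and variable names here are illustrative, not SPDK's actual helpers:

    #!/usr/bin/env bash
    # Sketch only -- mirrors the logic traced above, not SPDK's code.
    # Pin the full count to each explicitly requested node; otherwise
    # split it evenly across all NUMA nodes present on the system.
    distribute_hugepages() {
        local nr_hugepages=$1; shift
        local -a user_nodes=("$@")
        local -A nodes_test=()
        local node no_nodes

        if (( ${#user_nodes[@]} > 0 )); then
            for node in "${user_nodes[@]}"; do
                nodes_test[$node]=$nr_hugepages   # e.g. nodes_test[0]=1024
            done
        else
            no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* | wc -l)
            for (( node = 0; node < no_nodes; node++ )); do
                nodes_test[$node]=$(( nr_hugepages / no_nodes ))
            done
        fi

        for node in "${!nodes_test[@]}"; do
            echo "node${node}=${nodes_test[$node]}"
        done
    }

    # 2097152 kB requested / 2048 kB per page = 1024 pages, all on node 0,
    # matching the 'get_test_nr_hugepages 2097152 0' call in the trace:
    distribute_hugepages 1024 0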
00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.497 23:40:39 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:10.876 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:10.876 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:10.876 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:10.876 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:10.876 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:10.876 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:10.876 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:10.876 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:10.876 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:10.876 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:10.876 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:10.876 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:10.876 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:10.876 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:10.876 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:10.876 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:10.876 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:10.876 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.877 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45939992 kB' 'MemAvailable: 49418536 kB' 'Buffers: 2704 kB' 'Cached: 10166784 kB' 'SwapCached: 0 kB' 'Active: 7168676 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779772 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496168 kB' 'Mapped: 208252 kB' 'Shmem: 6286732 kB' 'KReclaimable: 179256 kB' 'Slab: 550088 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370832 kB' 'KernelStack: 12784 kB' 'PageTables: 7856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7885324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: setup/common.sh@31-32 loop continues past every key from MemTotal through HardwareCorrupted while scanning for AnonHugePages]
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.878 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45941020 kB' 'MemAvailable: 49419564 kB' 'Buffers: 2704 kB' 'Cached: 10166788 kB' 'SwapCached: 0 kB' 'Active: 7168288 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779384 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495816 kB' 'Mapped: 208220 kB' 'Shmem: 6286736 kB' 'KReclaimable: 179256 kB' 'Slab: 550080 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370824 kB' 'KernelStack: 12768 kB' 'PageTables: 7792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7885340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
[xtrace condensed: setup/common.sh@31-32 loop continues past every key from MemTotal through HugePages_Free while scanning for HugePages_Surp]
00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
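Every get_meminfo call in this log produces the same wall of xtrace: the helper snapshots the meminfo file, strips any "Node <id>" prefix, then walks it key by key (IFS=': '; read -r var val _), continuing until the requested field matches, at which point it echoes the value and returns. Before the third scan below, a self-contained sketch of that pattern, assuming the usual /proc/meminfo and per-node sysfs layouts (simplified; not the setup/common.sh implementation itself):

    #!/usr/bin/env bash
    # Simplified sketch of the scan pattern traced above.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node statistics live in sysfs when a node ID is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}   # per-node files prefix lines with "Node <id>"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"         # value only; the 'kB' unit lands in $_
                return 0
            fi
        done < "$mem_f"
        return 1                         # key not present
    }

    get_meminfo HugePages_Surp       # system-wide; the runs above print 0
    get_meminfo HugePages_Total 0    # node 0's count, if the sysfs file exists

The escaped comparisons in the trace ([[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] and so on) are simply how bash xtrace renders the literal right-hand side of this key match for each field visited.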
23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 
23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.879 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45941336 kB' 'MemAvailable: 49419880 kB' 'Buffers: 2704 kB' 'Cached: 10166816 kB' 'SwapCached: 0 kB' 'Active: 7168528 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779624 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496168 kB' 'Mapped: 208220 kB' 'Shmem: 6286764 kB' 'KReclaimable: 179256 kB' 'Slab: 550028 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370772 kB' 'KernelStack: 12784 kB' 'PageTables: 7840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7885868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB' 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.880 23:40:41 setup.sh.hugepages.no_shrink_alloc 
[xtrace condensed: setup/common.sh@31-@32 walk every /proc/meminfo key from MemTotal to HugePages_Free against HugePages_Rsvd, issuing IFS=': ', read -r var val _, and continue for every non-match]
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:10.882 nr_hugepages=1024
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:10.882 resv_hugepages=0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:10.882 surplus_hugepages=0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:10.882 anon_hugepages=0
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.882 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45941336 kB' 'MemAvailable: 49419880 kB' 'Buffers: 2704 kB' 'Cached: 10166836 kB' 'SwapCached: 0 kB' 'Active: 7168348 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779444 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 495920 kB' 'Mapped: 208220 kB' 'Shmem: 6286784 kB' 'KReclaimable: 179256 kB' 'Slab: 550028 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370772 kB' 'KernelStack: 12752 kB' 'PageTables: 7744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7885888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
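The hugepages.sh@99-@110 lines above are the bookkeeping of verify_nr_hugepages: surplus and reserved pages are fetched, echoed, and the kernel's HugePages_Total is required to equal the requested count plus both adjustments. The same arithmetic, spelled out as a sketch (reuses the get_meminfo sketch above; nr_hugepages=1024 is the requested count from this run):

    surp=$(get_meminfo HugePages_Surp)   # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)   # 0 in this run
    nr_hugepages=1024

    echo "nr_hugepages=$nr_hugepages"
    echo "resv_hugepages=$resv"
    echo "surplus_hugepages=$surp"

    # The kernel's total must account for the request plus surplus and
    # reserved pages...
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    # ...and with surp == resv == 0 it must simply equal the request.
    (( $(get_meminfo HugePages_Total) == nr_hugepages ))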
[xtrace condensed: setup/common.sh@31-@32 walk every /proc/meminfo key from MemTotal to Unaccepted against HugePages_Total, issuing IFS=': ', read -r var val _, and continue for every non-match]
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
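get_nodes, traced at hugepages.sh@27-@33, discovers the NUMA topology by globbing sysfs: two nodes on this machine, with all 1024 pages on node0 and none on node1. A sketch of that enumeration (extglob is required for the +([0-9]) glob; the per-node count is fetched via the get_meminfo sketch above for illustration, since the trace only shows the already-expanded values 1024 and 0):

    shopt -s extglob
    declare -a nodes_sys

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} reduces ".../node0" to the bare index "0"
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if sysfs exposed no nodes
    }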
23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20248360 kB' 'MemUsed: 12628580 kB' 'SwapCached: 0 kB' 'Active: 6048368 kB' 'Inactive: 3248472 kB' 'Active(anon): 5837632 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967892 kB' 'Mapped: 172292 kB' 'AnonPages: 332100 kB' 'Shmem: 5508684 kB' 'KernelStack: 7640 kB' 'PageTables: 5180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358184 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
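Note the switch to /sys/devices/system/node/node0/meminfo above: per-node meminfo lines carry a "Node 0 " prefix that /proc/meminfo lines lack, which is exactly what the mem=("${mem[@]#Node +([0-9]) }") expansion strips so the same parse loop serves both files. A small demonstration:

    shopt -s extglob
    line='Node 0 HugePages_Surp:      0'
    echo "${line#Node +([0-9]) }"   # -> 'HugePages_Surp:      0'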
-- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.884 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:10.885 23:40:41 
00:03:10.885 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- [trace condensed: the IFS=': '/read loop walks the remaining node0 meminfo keys (Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free), hitting `continue` on each until HugePages_Surp matches]
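A note on the escaping above: under `set -x`, bash prints a quoted right-hand operand of `[[ ... == "pattern" ]]` with every character backslash-escaped, to mark it as a literal match rather than a glob. That is why the requested key renders as \H\u\g\e\P\a\g\e\s\_\S\u\r\p while the unquoted left-hand side does not. A quick reproduction:

  set -x
  var=HugePages_Surp
  [[ $var == "HugePages_Surp" ]] && echo matched
  # xtrace prints: + [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]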
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:10.886 node0=1024 expecting 1024
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:10.886 23:40:41 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:12.348 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:12.348 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:12.348 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:12.348 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:12.348 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:12.348 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:12.348 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:12.348 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:12.348 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:12.348 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:12.348 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:12.348 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:12.348 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:12.348 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:12.348 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:12.348 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:12.348 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:12.348 INFO: Requested 512 hugepages but 1024 already allocated on node0
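setup.sh honors the NRHUGE and CLEAR_HUGE environment variables set just above: with CLEAR_HUGE=no the existing reservation is left alone, and since node0 already holds 1024 pages of 2048 kB the request for 512 is a no-op, hence the INFO line. A minimal sketch of that top-up logic (the helper name ensure_hugepages and the hard-coded node0/2048kB sysfs path are ours, not setup.sh's):

  NRHUGE=${NRHUGE:-512}
  sysfs=/sys/devices/system/node/node0/hugepages/hugepages-2048kB
  ensure_hugepages() {
      # Sketch only, not SPDK's setup.sh.
      local have
      have=$(< "$sysfs/nr_hugepages")
      if (( have >= NRHUGE )); then
          # Nothing to do; report the existing reservation.
          echo "INFO: Requested $NRHUGE hugepages but $have already allocated on node0"
      else
          # Top up the per-node reservation (needs root).
          echo "$NRHUGE" > "$sysfs/nr_hugepages"
      fi
  }
  ensure_hugepages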
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.348 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.349 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.349 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45965692 kB' 'MemAvailable: 49444236 kB' 'Buffers: 2704 kB' 'Cached: 10166900 kB' 'SwapCached: 0 kB' 'Active: 7168888 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779984 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496316 kB' 'Mapped: 208336 kB' 'Shmem: 6286848 kB' 'KReclaimable: 179256 kB' 'Slab: 549816 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370560 kB' 'KernelStack: 12736 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7886120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
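The hugepages.sh@96 test above checks /sys/kernel/mm/transparent_hugepage/enabled, whose contents ("always [madvise] never", with brackets marking the active mode) expand straight into the `[[ ... != *\[\n\e\v\e\r\]* ]]` expression; because the active mode is not [never], the verifier goes on to sample AnonHugePages. One way to pull out the active mode on its own (a sketch, not part of the SPDK scripts):

  thp=$(</sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
  if [[ $thp != *"[never]"* ]]; then
      # The kernel brackets the active policy; strip the brackets.
      active=$(grep -o '\[[a-z]*\]' <<< "$thp" | tr -d '[]')
      echo "THP enabled, active mode: $active"
  fi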
00:03:12.349 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- [trace condensed: the snapshot above is scanned key by key against AnonHugePages, `continue` on every non-matching key]
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: same get_meminfo prologue as above: locals, mem_f=/proc/meminfo fallback, mapfile -t mem, IFS=': ']
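By this point the same call shape has run twice (AnonHugePages, now HugePages_Surp): slurp the meminfo file, strip any "Node <n> " prefixes, read key/value pairs with IFS=': ', and echo the value once the requested key matches. When no node is given, `node=` stays empty, the `/sys/devices/system/node/node/meminfo` existence test fails, and the function falls back to the global /proc/meminfo. A condensed sketch of that behavior (not the verbatim setup/common.sh function):

  get_meminfo_sketch() {
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      # Per-node file only exists when a node id was actually passed.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      # Per-node files prefix every line with "Node 0 "; drop it first,
      # then split "Key: value [kB]" on ':' and space.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }
  get_meminfo_sketch HugePages_Surp      # global /proc/meminfo
  get_meminfo_sketch HugePages_Surp 0    # per-node meminfo for node0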
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45964916 kB' 'MemAvailable: 49443460 kB' 'Buffers: 2704 kB' 'Cached: 10166904 kB' 'SwapCached: 0 kB' 'Active: 7168752 kB' 'Inactive: 3493852 kB' 'Active(anon): 6779848 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496216 kB' 'Mapped: 208304 kB' 'Shmem: 6286852 kB' 'KReclaimable: 179256 kB' 'Slab: 549816 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370560 kB' 'KernelStack: 12784 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7886136 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:12.350 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- [trace condensed: the snapshot above is scanned key by key against HugePages_Surp, `continue` on every non-matching key]
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- [trace condensed: same get_meminfo prologue as above: locals, mem_f=/proc/meminfo fallback, mapfile -t mem, IFS=': ']
00:03:12.352 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:12.353 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45965200 kB' 'MemAvailable: 49443744 kB' 'Buffers: 2704 kB' 'Cached: 10166904 kB' 'SwapCached: 0 kB' 'Active: 7169416 kB' 'Inactive: 3493852 kB' 'Active(anon): 6780512 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496452 kB' 'Mapped: 208244 kB' 'Shmem: 6286852 kB' 'KReclaimable: 179256 kB' 'Slab: 549824 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370568 kB' 'KernelStack: 12912 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7888520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196628 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
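Three counters have now been sampled from the same family of snapshots: anon=0 (AnonHugePages), surp=0 (HugePages_Surp), and HugePages_Rsvd is read next; HugePages_Total and HugePages_Free stay at 1024 throughout, consistent with the earlier "node0=1024 expecting 1024" check. If you wanted all of them in one pass instead of one get_meminfo call per key, a single awk sweep would do (a sketch; the lower-cased variable names are ours):

  eval "$(awk -F': +' '
      $1 ~ /^(HugePages_(Total|Free|Rsvd|Surp)|AnonHugePages)$/ {
          sub(/ kB$/, "", $2)                 # strip units where present
          printf "%s=%s\n", tolower($1), $2   # e.g. hugepages_total=1024
      }' /proc/meminfo)"
  echo "total=$hugepages_total free=$hugepages_free rsvd=$hugepages_rsvd surp=$hugepages_surp anon=$anonhugepages"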
00:03:12.353 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- [trace condensed: the snapshot above is scanned key by key against HugePages_Rsvd, `continue` on every non-matching key]
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:12.354 nr_hugepages=1024 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:12.354 resv_hugepages=0 00:03:12.354 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:12.355 surplus_hugepages=0 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:12.355 anon_hugepages=0 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45965012 kB' 'MemAvailable: 49443556 kB' 'Buffers: 2704 kB' 
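The wall of `continue` entries condensed above is the get_meminfo helper scanning every /proc/meminfo field until it reaches the one requested (here HugePages_Rsvd, which echoes 0). A minimal standalone sketch of the same scan follows; the function name meminfo_get and the plain-/proc/meminfo path are my assumptions for illustration, not the traced common.sh itself:

  # Hypothetical helper mirroring the traced IFS=': ' scan of /proc/meminfo.
  meminfo_get() {
    local get=$1 var val _
    # IFS=': ' splits "Field:   value kB" into var=Field, val=value, _=kB.
    while IFS=': ' read -r var val _; do
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
  }
  # usage: resv=$(meminfo_get HugePages_Rsvd)

The same one-pass scan explains why the trace prints one comparison per meminfo field: the loop has no early index, it simply reads until the field name matches.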
00:03:12.355 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541728 kB' 'MemFree: 45965012 kB' 'MemAvailable: 49443556 kB' 'Buffers: 2704 kB' 'Cached: 10166940 kB' 'SwapCached: 0 kB' 'Active: 7168912 kB' 'Inactive: 3493852 kB' 'Active(anon): 6780008 kB' 'Inactive(anon): 0 kB' 'Active(file): 388904 kB' 'Inactive(file): 3493852 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496336 kB' 'Mapped: 208244 kB' 'Shmem: 6286888 kB' 'KReclaimable: 179256 kB' 'Slab: 549824 kB' 'SReclaimable: 179256 kB' 'SUnreclaim: 370568 kB' 'KernelStack: 13136 kB' 'PageTables: 8388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610892 kB' 'Committed_AS: 7888540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196692 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2154076 kB' 'DirectMap2M: 16640000 kB' 'DirectMap1G: 50331648 kB'
00:03:12.355 [xtrace condensed: the same setup/common.sh@31-32 field-by-field scan repeats over this snapshot until HugePages_Total is reached]
00:03:12.356 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:12.356 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:03:12.356 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.356 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
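When get_meminfo is called with a node argument, as in the `get_meminfo HugePages_Surp 0` above, the trace shows it switching mem_f to /sys/devices/system/node/node0/meminfo and stripping the "Node 0 " prefix that every line of a per-node meminfo file carries. A hedged per-node sketch of that variant; the helper name node_meminfo_get is hypothetical, and extglob is needed for the +([0-9]) pattern the trace uses at common.sh@29:

  shopt -s extglob
  node_meminfo_get() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _ line
    # Per-node files exist only on NUMA systems; fall back to the global file.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] \
      && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the leading "Node N ", as common.sh@29 does
    for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
  }
  # usage: surp=$(node_meminfo_get HugePages_Surp 0)

The get_nodes glob traced just before it (/sys/devices/system/node/node+([0-9])) is what yields no_nodes=2 on this two-socket host.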
00:03:12.357 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 20257780 kB' 'MemUsed: 12619160 kB' 'SwapCached: 0 kB' 'Active: 6048596 kB' 'Inactive: 3248472 kB' 'Active(anon): 5837860 kB' 'Inactive(anon): 0 kB' 'Active(file): 210736 kB' 'Inactive(file): 3248472 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8967996 kB' 'Mapped: 172300 kB' 'AnonPages: 332252 kB' 'Shmem: 5508788 kB' 'KernelStack: 7640 kB' 'PageTables: 5216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115888 kB' 'Slab: 358116 kB' 'SReclaimable: 115888 kB' 'SUnreclaim: 242228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:12.357 [xtrace condensed: the node0 snapshot above is scanned field by field until HugePages_Surp is reached]
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:12.358 node0=1024 expecting 1024
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:12.358 
00:03:12.358 real	0m2.830s
00:03:12.358 user	0m1.176s
00:03:12.358 sys	0m1.566s
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:12.358 23:40:42 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:12.358 ************************************
00:03:12.358 END TEST no_shrink_alloc
00:03:12.358 ************************************
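With no_shrink_alloc passed (node0 reports 1024 pages, as expected), the clear_hp cleanup traced below walks every NUMA node and zeroes each hugepage pool through sysfs. A minimal equivalent of that walk, assuming root and the standard sysfs layout; this mirrors the hugepages.sh@39-41 loop but is my own sketch, not the script itself:

  # Reset every per-node hugepage pool (2 MiB and 1 GiB alike); requires root.
  for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
      echo 0 > "$hp/nr_hugepages"
    done
  done
  export CLEAR_HUGE=yes

Zeroing per node rather than via the global /proc/sys/vm/nr_hugepages knob is what lets the next test start from a clean, symmetric allocation on both sockets.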
"/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.358 23:40:42 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.358 00:03:12.358 real 0m11.575s 00:03:12.358 user 0m4.538s 00:03:12.358 sys 0m5.861s 00:03:12.358 23:40:42 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:12.358 23:40:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.358 ************************************ 00:03:12.358 END TEST hugepages 00:03:12.358 ************************************ 00:03:12.358 23:40:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:12.358 ************************************ 00:03:12.358 START TEST driver 00:03:12.358 ************************************ 00:03:12.359 23:40:42 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:12.359 * Looking for test storage... 
00:03:12.358 23:40:42 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:12.358 23:40:42 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:12.358 ************************************
00:03:12.358 START TEST driver
00:03:12.358 ************************************
00:03:12.359 23:40:42 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:03:12.359 * Looking for test storage...
00:03:12.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:12.359 23:40:42 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:03:12.359 23:40:42 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:12.359 23:40:42 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:14.888 23:40:45 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:03:14.888 23:40:45 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:14.888 23:40:45 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:14.888 23:40:45 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:14.888 ************************************
00:03:14.888 START TEST guess_driver
00:03:14.888 ************************************
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:03:14.888 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
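pick_driver settles on vfio-pci here because the node exposes 141 IOMMU groups and modprobe resolves vfio_pci to a chain of .ko modules. A sketch of that decision, condensed from the trace (the fallback string matches the check at driver.sh@51):

```bash
#!/usr/bin/env bash
# Sketch of pick_driver's vfio branch as traced above.
shopt -s nullglob   # so an empty iommu_groups glob counts as zero
vfio() {
    local unsafe_vfio=N
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe_vfio=$(< /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    local iommu_groups=(/sys/kernel/iommu_groups/*)
    if ((${#iommu_groups[@]} > 0)) || [[ $unsafe_vfio == Y ]]; then
        # Same test as the trace's "[[ $(modprobe --show-depends ...) == *.ko* ]]".
        if [[ $(modprobe --show-depends vfio_pci 2>/dev/null) == *.ko* ]]; then
            echo vfio-pci
            return 0
        fi
    fi
    echo 'No valid driver found'
}
driver=$(vfio)
```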
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:03:14.888 Looking for driver=vfio-pci
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:03:14.888 23:40:45 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:16.261 23:40:46 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:16.261 23:40:46 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:16.261 23:40:46 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[the @58/@61/@57 triple repeats identically for every device line that setup.sh config prints; each device reports "-> vfio-pci"]
00:03:17.194 23:40:47 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:03:17.194 23:40:47 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:03:17.194 23:40:47 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:03:17.194 23:40:47 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:03:17.195 23:40:47 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:03:17.195 23:40:47 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:17.195 23:40:47 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:19.722
00:03:19.722 real 0m4.807s
00:03:19.722 user 0m1.079s
00:03:19.722 sys 0m1.852s
00:03:19.722 23:40:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.722 23:40:50 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:03:19.722 ************************************
00:03:19.722 END TEST guess_driver
00:03:19.722 ************************************
00:03:19.722
00:03:19.722 real 0m7.347s
00:03:19.722 user 0m1.623s
00:03:19.722 sys 0m2.883s
00:03:19.722 23:40:50 setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:19.722 23:40:50 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:03:19.722 ************************************
00:03:19.722 END TEST driver
00:03:19.722 ************************************
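The read loop condensed above is how guess_driver confirms the choice: it re-runs setup.sh config and requires every device line to report "-> vfio-pci". A sketch, assuming config output of the shape `<bdf> (<vendor> <device>): <class> -> <driver>`:

```bash
#!/usr/bin/env bash
# Sketch of the guess_driver verification loop. The line shape, e.g.
#   0000:88:00.0 (8086 0a54): nvme -> vfio-pci
# is an assumption about setup.sh config output.
driver=vfio-pci fail=0
while read -r _ _ _ _ marker setup_driver; do
    [[ $marker == '->' ]] || continue            # only device lines
    [[ $setup_driver == "$driver" ]] || fail=1   # all must match
done < <(./scripts/setup.sh config)
(( fail == 0 )) && echo "every device bound to $driver"
```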
00:03:19.722 23:40:50 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:19.722 23:40:50 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.722 23:40:50 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.722 23:40:50 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:19.722 ************************************
00:03:19.722 START TEST devices
00:03:19.722 ************************************
00:03:19.722 23:40:50 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:03:19.722 * Looking for test storage...
00:03:19.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:19.722 23:40:50 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:19.722 23:40:50 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:03:19.722 23:40:50 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:19.722 23:40:50 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1667 -- # zoned_devs=()
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1667 -- # local -gA zoned_devs
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1668 -- # local nvme bdf
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # for nvme in /sys/block/nvme*
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1660 -- # local device=nvme0n1
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:21.095 23:40:51 setup.sh.devices -- common/autotest_common.sh@1663 -- # [[ none != none ]]
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]]
00:03:21.095 23:40:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:21.095 23:40:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:21.095 23:40:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:21.353 No valid GPT data, bailing
00:03:21.353 23:40:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:21.353 23:40:51 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:03:21.353 23:40:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:21.353 23:40:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:21.353 23:40:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:21.353 23:40:51 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
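block_in_use, traced above, is the gate that picks the test disk: a device qualifies only if neither spdk-gpt.py nor blkid sees a partition table, and it is at least min_disk_size (3 GiB). A sketch of the blkid/size half of that check:

```bash
#!/usr/bin/env bash
# Sketch of the free-disk probe: no PTTYPE from blkid and a large enough
# device. The real script additionally consults scripts/spdk-gpt.py.
min_disk_size=$((3 * 1024 * 1024 * 1024))
block_in_use() {
    local pt
    pt=$(blkid -s PTTYPE -o value "/dev/$1")
    [[ -n $pt ]]   # a non-empty PTTYPE means the disk is claimed
}
block=nvme0n1
size=$(( $(< "/sys/block/$block/size") * 512 ))   # sectors to bytes
if ! block_in_use "$block" && (( size >= min_disk_size )); then
    echo "using /dev/$block as test disk"         # 1000204886016 B here
fi
```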
00:03:21.353 23:40:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:21.353 23:40:51 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:21.353 23:40:51 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:21.353 23:40:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:21.353 ************************************
00:03:21.353 START TEST nvme_mount
00:03:21.353 ************************************
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:21.353 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:21.354 23:40:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:22.289 Creating new GPT entries in memory.
00:03:22.289 GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
00:03:22.289 23:40:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:22.289 23:40:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:22.289 23:40:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:22.289 23:40:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:22.289 23:40:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:23.224 Creating new GPT entries in memory.
00:03:23.224 The operation has completed successfully.
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 3240363
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:23.224 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
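partition_drive wipes the disk, then creates each 1 GiB partition under flock on the device node so concurrent jobs cannot race sgdisk, while sync_dev_uevents.sh waits for the partition uevents. A condensed sketch of the create-format-mount flow; the mount point is a placeholder:

```bash
#!/usr/bin/env bash
# Sketch of the partition/format/mount flow traced above.
disk=nvme0n1
size=$((1073741824 / 512))        # 1 GiB partition size in sectors
mount_point=/tmp/nvme_mount       # placeholder for the test mount dir
sgdisk "/dev/$disk" --zap-all     # destroy any existing GPT/MBR
part_start=2048
part_end=$((part_start + size - 1))
flock "/dev/$disk" sgdisk "/dev/$disk" --new=1:$part_start:$part_end
mkdir -p "$mount_point"
mkfs.ext4 -qF "/dev/${disk}p1"
mount "/dev/${disk}p1" "$mount_point"
```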
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.482 23:40:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:24.415 23:40:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:24.415 23:40:54 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:24.415 23:40:54 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:24.415 23:40:54 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the @62/@60 pair repeats for every other PCI function setup.sh config reports -- 0000:00:04.0-7 and 0000:80:04.0-7 -- each compared against 0000:88:00.0 and skipped]
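verify() narrows PCI_ALLOWED to the test device's BDF and rescans setup.sh config: the NVMe function must show as active (mounted, hence not bound), and every other function is skipped, which is why the sixteen IOAT channels each produce one comparison above. A sketch, with the same line-shape assumption as before:

```bash
#!/usr/bin/env bash
# Sketch of verify()'s scan: only the allowed BDF is inspected and its
# status must name the expected active mounts.
dev=0000:88:00.0
mounts=nvme0n1:nvme0n1p1
found=0
while read -r pci _ _ status; do
    [[ $pci == "$dev" ]] || continue
    [[ $status == *'Active devices: '*"$mounts"* ]] && found=1
done < <(PCI_ALLOWED=$dev ./scripts/setup.sh config)
(( found == 1 )) && echo "device in use as expected, left unbound"
```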
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:24.674 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:24.674 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:24.933 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:24.933 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:03:24.933 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:24.933 /dev/nvme0n1: calling ioctl to re-read partition table: Success
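cleanup_nvme unwinds the fixture in reverse: unmount if still mounted, then wipefs both the partition and the whole disk, which is what produces the erase reports above. A sketch:

```bash
#!/usr/bin/env bash
# Sketch of cleanup_nvme: unmount, then strip filesystem and
# partition-table signatures so the next test sees a blank disk.
mount_point=/tmp/nvme_mount   # placeholder, as in the earlier sketch
if mountpoint -q "$mount_point"; then
    umount "$mount_point"
fi
[[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1
[[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1
```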
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:24.933 23:40:55 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the @62/@60 scan repeats for 0000:00:04.0-7 and 0000:80:04.0-7 as before, each skipped]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' ''
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:26.307 23:40:56 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:27.242 23:40:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:27.242 23:40:57 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]]
00:03:27.242 23:40:57 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:03:27.242 23:40:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the @62/@60 scan repeats for 0000:00:04.0-7 and 0000:80:04.0-7 as before, each skipped]
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:27.501 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:27.501
00:03:27.501 real 0m6.263s
00:03:27.501 user 0m1.456s
00:03:27.501 sys 0m2.361s
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:27.501 23:40:58 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x
00:03:27.501 ************************************
00:03:27.501 END TEST nvme_mount
00:03:27.501 ************************************
00:03:27.501 23:40:58 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount
00:03:27.501 23:40:58 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.501 23:40:58 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.501 23:40:58 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:03:27.501 ************************************
00:03:27.501 START TEST dm_mount
00:03:27.501 ************************************
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=()
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:27.501 23:40:58 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2
00:03:28.872 Creating new GPT entries in memory.
00:03:28.872 GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
00:03:28.872 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:03:28.872 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:28.872 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:28.872 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:28.872 23:40:59 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:29.804 Creating new GPT entries in memory.
00:03:29.804 The operation has completed successfully.
00:03:29.804 23:41:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:29.804 23:41:00 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:29.804 23:41:00 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:29.804 23:41:00 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:29.804 23:41:00 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351
00:03:30.737 The operation has completed successfully.
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ ))
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 3242755
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5}
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size=
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
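dm_mount layers a device-mapper target over the two new partitions, polls for the /dev/mapper node, resolves it to dm-0, and checks the holders links to confirm both partitions back it. The table passed to dmsetup create is not visible in the trace, so the linear concatenation below is an assumption:

```bash
#!/usr/bin/env bash
# Sketch of the dm_mount setup. Sizes come from sysfs in 512-byte
# sectors; the linear table is assumed, not shown in the trace.
p1=$(< /sys/class/block/nvme0n1p1/size)
p2=$(< /sys/class/block/nvme0n1p2/size)
dmsetup create nvme_dm_test <<EOF
0 $p1 linear /dev/nvme0n1p1 0
$p1 $p2 linear /dev/nvme0n1p2 0
EOF
for t in {1..5}; do   # allow udev a moment to create the node
    [[ -e /dev/mapper/nvme_dm_test ]] && break
    sleep 1
done
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")   # dm-0 here
[[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]   # both partitions must
[[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]   # list dm-0 as holder
```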
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:30.737 23:41:01 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:31.668 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:31.668 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]]
00:03:31.668 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:31.668 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the @62/@60 scan repeats for 0000:00:04.0-7 and 0000:80:04.0-7 as before, each skipped]
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' ''
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:03:31.926 23:41:02 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]]
00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1
00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[the per-function scan of 0000:00:04.x and 0000:80:04.x begins to repeat once more; the captured log ends here, mid-scan]
[[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:33.438 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:33.438 00:03:33.438 real 0m5.688s 00:03:33.438 user 0m1.003s 00:03:33.438 sys 0m1.538s 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.438 23:41:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:33.438 ************************************ 00:03:33.438 END TEST dm_mount 00:03:33.438 ************************************ 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.438 23:41:03 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.438 23:41:03 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:33.696 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:33.696 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:33.696 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:33.696 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:33.696 23:41:04 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:33.696 00:03:33.696 real 0m13.866s 00:03:33.696 user 0m3.103s 00:03:33.696 sys 0m4.934s 00:03:33.696 23:41:04 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.696 23:41:04 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:33.696 ************************************ 00:03:33.696 END TEST devices 00:03:33.696 ************************************ 00:03:33.696 00:03:33.696 real 0m43.211s 00:03:33.696 user 0m12.626s 00:03:33.696 sys 0m18.762s 00:03:33.696 23:41:04 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:33.696 23:41:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:33.696 ************************************ 00:03:33.696 END TEST setup.sh 00:03:33.696 ************************************ 00:03:33.696 23:41:04 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:35.067 Hugepages 00:03:35.067 node hugesize free / total 00:03:35.067 node0 1048576kB 0 / 0 00:03:35.067 node0 2048kB 2048 / 2048 00:03:35.067 node1 1048576kB 0 / 0 00:03:35.067 node1 2048kB 0 / 0 00:03:35.067 00:03:35.067 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:35.067 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:35.067 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:35.067 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:35.068 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:35.068 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:35.068 23:41:05 -- spdk/autotest.sh@130 -- # uname -s 00:03:35.068 23:41:05 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:35.068 23:41:05 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:35.068 23:41:05 -- common/autotest_common.sh@1529 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:36.000 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.000 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:36.000 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.258 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.258 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.258 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.258 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.258 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:36.258 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:37.192 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:37.450 23:41:07 -- common/autotest_common.sh@1530 -- # sleep 1 00:03:38.383 23:41:08 -- common/autotest_common.sh@1531 -- # bdfs=() 00:03:38.383 23:41:08 -- common/autotest_common.sh@1531 -- # local bdfs 00:03:38.383 23:41:08 -- common/autotest_common.sh@1532 -- # bdfs=($(get_nvme_bdfs)) 00:03:38.383 23:41:08 -- common/autotest_common.sh@1532 -- # get_nvme_bdfs 00:03:38.383 23:41:08 -- common/autotest_common.sh@1511 -- # bdfs=() 00:03:38.383 23:41:08 -- common/autotest_common.sh@1511 -- # local bdfs 00:03:38.383 23:41:08 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.383 23:41:08 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.383 23:41:08 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:03:38.383 23:41:08 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:03:38.383 23:41:08 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:03:38.383 23:41:08 -- common/autotest_common.sh@1534 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.316 Waiting for block devices as requested 00:03:39.574 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:39.574 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:39.832 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:39.832 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:39.832 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:39.832 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:40.090 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:40.090 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:40.090 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:40.090 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:40.348 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:40.348 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:40.348 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:40.348 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:40.606 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:40.606 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:40.606 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:40.864 23:41:11 -- common/autotest_common.sh@1536 -- # for bdf in "${bdfs[@]}" 
00:03:40.864 23:41:11 -- common/autotest_common.sh@1537 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1500 -- # readlink -f /sys/class/nvme/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1500 -- # grep 0000:88:00.0/nvme/nvme 00:03:40.864 23:41:11 -- common/autotest_common.sh@1500 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1501 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:40.864 23:41:11 -- common/autotest_common.sh@1505 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1505 -- # printf '%s\n' nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1537 -- # nvme_ctrlr=/dev/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1538 -- # [[ -z /dev/nvme0 ]] 00:03:40.864 23:41:11 -- common/autotest_common.sh@1543 -- # nvme id-ctrl /dev/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1543 -- # grep oacs 00:03:40.864 23:41:11 -- common/autotest_common.sh@1543 -- # cut -d: -f2 00:03:40.864 23:41:11 -- common/autotest_common.sh@1543 -- # oacs=' 0xf' 00:03:40.864 23:41:11 -- common/autotest_common.sh@1544 -- # oacs_ns_manage=8 00:03:40.864 23:41:11 -- common/autotest_common.sh@1546 -- # [[ 8 -ne 0 ]] 00:03:40.864 23:41:11 -- common/autotest_common.sh@1552 -- # nvme id-ctrl /dev/nvme0 00:03:40.864 23:41:11 -- common/autotest_common.sh@1552 -- # grep unvmcap 00:03:40.864 23:41:11 -- common/autotest_common.sh@1552 -- # cut -d: -f2 00:03:40.864 23:41:11 -- common/autotest_common.sh@1552 -- # unvmcap=' 0' 00:03:40.864 23:41:11 -- common/autotest_common.sh@1553 -- # [[ 0 -eq 0 ]] 00:03:40.864 23:41:11 -- common/autotest_common.sh@1555 -- # continue 00:03:40.864 23:41:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:40.864 23:41:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:40.864 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:03:40.864 23:41:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:40.864 23:41:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:40.864 23:41:11 -- common/autotest_common.sh@10 -- # set +x 00:03:40.864 23:41:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:42.239 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:42.239 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:42.239 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:43.174 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:43.174 23:41:13 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:43.174 23:41:13 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:43.174 23:41:13 -- 
common/autotest_common.sh@10 -- # set +x 00:03:43.174 23:41:13 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:43.174 23:41:13 -- common/autotest_common.sh@1589 -- # mapfile -t bdfs 00:03:43.174 23:41:13 -- common/autotest_common.sh@1589 -- # get_nvme_bdfs_by_id 0x0a54 00:03:43.174 23:41:13 -- common/autotest_common.sh@1575 -- # bdfs=() 00:03:43.174 23:41:13 -- common/autotest_common.sh@1575 -- # local bdfs 00:03:43.174 23:41:13 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs 00:03:43.174 23:41:13 -- common/autotest_common.sh@1511 -- # bdfs=() 00:03:43.174 23:41:13 -- common/autotest_common.sh@1511 -- # local bdfs 00:03:43.174 23:41:13 -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:43.174 23:41:13 -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:43.174 23:41:13 -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:03:43.432 23:41:13 -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:03:43.432 23:41:13 -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:03:43.432 23:41:13 -- common/autotest_common.sh@1577 -- # for bdf in $(get_nvme_bdfs) 00:03:43.432 23:41:13 -- common/autotest_common.sh@1578 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:43.432 23:41:13 -- common/autotest_common.sh@1578 -- # device=0x0a54 00:03:43.432 23:41:13 -- common/autotest_common.sh@1579 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:43.432 23:41:13 -- common/autotest_common.sh@1580 -- # bdfs+=($bdf) 00:03:43.432 23:41:13 -- common/autotest_common.sh@1584 -- # printf '%s\n' 0000:88:00.0 00:03:43.432 23:41:13 -- common/autotest_common.sh@1590 -- # [[ -z 0000:88:00.0 ]] 00:03:43.432 23:41:13 -- common/autotest_common.sh@1595 -- # spdk_tgt_pid=3247948 00:03:43.432 23:41:13 -- common/autotest_common.sh@1594 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:43.432 23:41:13 -- common/autotest_common.sh@1596 -- # waitforlisten 3247948 00:03:43.432 23:41:13 -- common/autotest_common.sh@829 -- # '[' -z 3247948 ']' 00:03:43.432 23:41:13 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:43.432 23:41:13 -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:43.432 23:41:13 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:43.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:43.432 23:41:13 -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:43.432 23:41:13 -- common/autotest_common.sh@10 -- # set +x 00:03:43.432 [2024-07-24 23:41:13.891562] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
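The opal_revert_cleanup pass above resolves NVMe controllers to PCI BDFs through gen_nvme.sh | jq and then matches the 0x0a54 device ID by reading sysfs (the cat /sys/bus/pci/devices/0000:88:00.0/device call). A minimal stand-alone sketch of that device-ID lookup, assuming a stock sysfs layout; the loop below is illustrative, not code from the SPDK tree:

    # List NVMe-backed BDFs whose PCI device ID matches a target,
    # mirroring the get_nvme_bdfs_by_id 0x0a54 walk traced above.
    target=0x0a54
    for ctrl in /sys/class/nvme/nvme[0-9]*; do
        # /sys/class/nvme/nvmeN/device symlinks to the PCI device directory.
        bdf=$(basename "$(readlink -f "$ctrl/device")")
        [[ "$(cat "/sys/bus/pci/devices/$bdf/device")" == "$target" ]] && echo "$bdf"
    done

On this node the loop would print 0000:88:00.0, matching the bdfs array the test builds.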
00:03:43.432 [2024-07-24 23:41:13.891647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247948 ] 00:03:43.432 EAL: No free 2048 kB hugepages reported on node 1 00:03:43.432 [2024-07-24 23:41:13.952668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:43.691 [2024-07-24 23:41:14.068647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:44.256 23:41:14 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:44.256 23:41:14 -- common/autotest_common.sh@862 -- # return 0 00:03:44.256 23:41:14 -- common/autotest_common.sh@1598 -- # bdf_id=0 00:03:44.256 23:41:14 -- common/autotest_common.sh@1599 -- # for bdf in "${bdfs[@]}" 00:03:44.256 23:41:14 -- common/autotest_common.sh@1600 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:47.535 nvme0n1 00:03:47.535 23:41:17 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:47.535 [2024-07-24 23:41:18.120131] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:47.535 [2024-07-24 23:41:18.120177] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:47.535 request: 00:03:47.535 { 00:03:47.535 "nvme_ctrlr_name": "nvme0", 00:03:47.535 "password": "test", 00:03:47.535 "method": "bdev_nvme_opal_revert", 00:03:47.535 "req_id": 1 00:03:47.535 } 00:03:47.535 Got JSON-RPC error response 00:03:47.535 response: 00:03:47.535 { 00:03:47.535 "code": -32603, 00:03:47.535 "message": "Internal error" 00:03:47.535 } 00:03:47.536 23:41:18 -- common/autotest_common.sh@1602 -- # true 00:03:47.536 23:41:18 -- common/autotest_common.sh@1603 -- # (( ++bdf_id )) 00:03:47.536 23:41:18 -- common/autotest_common.sh@1606 -- # killprocess 3247948 00:03:47.536 23:41:18 -- common/autotest_common.sh@948 -- # '[' -z 3247948 ']' 00:03:47.536 23:41:18 -- common/autotest_common.sh@952 -- # kill -0 3247948 00:03:47.536 23:41:18 -- common/autotest_common.sh@953 -- # uname 00:03:47.536 23:41:18 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:47.536 23:41:18 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3247948 00:03:47.795 23:41:18 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:47.795 23:41:18 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:47.795 23:41:18 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3247948' 00:03:47.795 killing process with pid 3247948 00:03:47.795 23:41:18 -- common/autotest_common.sh@967 -- # kill 3247948 00:03:47.795 23:41:18 -- common/autotest_common.sh@972 -- # wait 3247948 00:03:49.693 23:41:19 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:49.693 23:41:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:49.693 23:41:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:49.693 23:41:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:49.693 23:41:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:49.693 23:41:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:49.693 23:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:49.693 23:41:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:49.693 23:41:19 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:49.693 23:41:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.693 23:41:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.693 23:41:19 -- common/autotest_common.sh@10 -- # set +x 00:03:49.693 ************************************ 00:03:49.693 START TEST env 00:03:49.693 ************************************ 00:03:49.693 23:41:20 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:49.693 * Looking for test storage... 00:03:49.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:49.693 23:41:20 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:49.693 23:41:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.693 23:41:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.693 23:41:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.693 ************************************ 00:03:49.693 START TEST env_memory 00:03:49.693 ************************************ 00:03:49.693 23:41:20 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:49.693 00:03:49.693 00:03:49.693 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.693 http://cunit.sourceforge.net/ 00:03:49.693 00:03:49.693 00:03:49.693 Suite: memory 00:03:49.693 Test: alloc and free memory map ...[2024-07-24 23:41:20.114670] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:49.693 passed 00:03:49.693 Test: mem map translation ...[2024-07-24 23:41:20.135749] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:49.693 [2024-07-24 23:41:20.135771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:49.693 [2024-07-24 23:41:20.135828] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:49.693 [2024-07-24 23:41:20.135841] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:49.693 passed 00:03:49.693 Test: mem map registration ...[2024-07-24 23:41:20.179200] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:49.693 [2024-07-24 23:41:20.179220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:49.693 passed 00:03:49.693 Test: mem map adjacent registrations ...passed 00:03:49.693 00:03:49.693 Run Summary: Type Total Ran Passed Failed Inactive 00:03:49.693 suites 1 1 n/a 0 0 00:03:49.693 tests 4 4 4 0 0 00:03:49.693 asserts 152 152 152 0 n/a 00:03:49.693 00:03:49.693 Elapsed time = 0.145 seconds 00:03:49.693 00:03:49.693 real 0m0.152s 00:03:49.693 user 0m0.140s 00:03:49.693 sys 0m0.012s 00:03:49.693 23:41:20 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:49.693 23:41:20 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:49.693 ************************************ 00:03:49.693 END TEST env_memory 00:03:49.693 ************************************ 00:03:49.693 23:41:20 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:49.693 23:41:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.693 23:41:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.693 23:41:20 env -- common/autotest_common.sh@10 -- # set +x 00:03:49.693 ************************************ 00:03:49.693 START TEST env_vtophys 00:03:49.693 ************************************ 00:03:49.693 23:41:20 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:49.693 EAL: lib.eal log level changed from notice to debug 00:03:49.693 EAL: Detected lcore 0 as core 0 on socket 0 00:03:49.693 EAL: Detected lcore 1 as core 1 on socket 0 00:03:49.693 EAL: Detected lcore 2 as core 2 on socket 0 00:03:49.693 EAL: Detected lcore 3 as core 3 on socket 0 00:03:49.693 EAL: Detected lcore 4 as core 4 on socket 0 00:03:49.693 EAL: Detected lcore 5 as core 5 on socket 0 00:03:49.693 EAL: Detected lcore 6 as core 8 on socket 0 00:03:49.693 EAL: Detected lcore 7 as core 9 on socket 0 00:03:49.693 EAL: Detected lcore 8 as core 10 on socket 0 00:03:49.693 EAL: Detected lcore 9 as core 11 on socket 0 00:03:49.693 EAL: Detected lcore 10 as core 12 on socket 0 00:03:49.693 EAL: Detected lcore 11 as core 13 on socket 0 00:03:49.693 EAL: Detected lcore 12 as core 0 on socket 1 00:03:49.693 EAL: Detected lcore 13 as core 1 on socket 1 00:03:49.693 EAL: Detected lcore 14 as core 2 on socket 1 00:03:49.693 EAL: Detected lcore 15 as core 3 on socket 1 00:03:49.693 EAL: Detected lcore 16 as core 4 on socket 1 00:03:49.693 EAL: Detected lcore 17 as core 5 on socket 1 00:03:49.693 EAL: Detected lcore 18 as core 8 on socket 1 00:03:49.693 EAL: Detected lcore 19 as core 9 on socket 1 00:03:49.693 EAL: Detected lcore 20 as core 10 on socket 1 00:03:49.693 EAL: Detected lcore 21 as core 11 on socket 1 00:03:49.693 EAL: Detected lcore 22 as core 12 on socket 1 00:03:49.693 EAL: Detected lcore 23 as core 13 on socket 1 00:03:49.693 EAL: Detected lcore 24 as core 0 on socket 0 00:03:49.693 EAL: Detected lcore 25 as core 1 on socket 0 00:03:49.693 EAL: Detected lcore 26 as core 2 on socket 0 00:03:49.693 EAL: Detected lcore 27 as core 3 on socket 0 00:03:49.693 EAL: Detected lcore 28 as core 4 on socket 0 00:03:49.693 EAL: Detected lcore 29 as core 5 on socket 0 00:03:49.693 EAL: Detected lcore 30 as core 8 on socket 0 00:03:49.693 EAL: Detected lcore 31 as core 9 on socket 0 00:03:49.693 EAL: Detected lcore 32 as core 10 on socket 0 00:03:49.693 EAL: Detected lcore 33 as core 11 on socket 0 00:03:49.693 EAL: Detected lcore 34 as core 12 on socket 0 00:03:49.693 EAL: Detected lcore 35 as core 13 on socket 0 00:03:49.693 EAL: Detected lcore 36 as core 0 on socket 1 00:03:49.693 EAL: Detected lcore 37 as core 1 on socket 1 00:03:49.693 EAL: Detected lcore 38 as core 2 on socket 1 00:03:49.693 EAL: Detected lcore 39 as core 3 on socket 1 00:03:49.693 EAL: Detected lcore 40 as core 4 on socket 1 00:03:49.693 EAL: Detected lcore 41 as core 5 on socket 1 00:03:49.693 EAL: Detected lcore 42 as core 8 on socket 1 00:03:49.693 EAL: Detected lcore 43 as core 9 
on socket 1 00:03:49.693 EAL: Detected lcore 44 as core 10 on socket 1 00:03:49.693 EAL: Detected lcore 45 as core 11 on socket 1 00:03:49.693 EAL: Detected lcore 46 as core 12 on socket 1 00:03:49.693 EAL: Detected lcore 47 as core 13 on socket 1 00:03:49.952 EAL: Maximum logical cores by configuration: 128 00:03:49.952 EAL: Detected CPU lcores: 48 00:03:49.952 EAL: Detected NUMA nodes: 2 00:03:49.952 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:49.952 EAL: Detected shared linkage of DPDK 00:03:49.952 EAL: No shared files mode enabled, IPC will be disabled 00:03:49.952 EAL: Bus pci wants IOVA as 'DC' 00:03:49.952 EAL: Buses did not request a specific IOVA mode. 00:03:49.952 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:49.952 EAL: Selected IOVA mode 'VA' 00:03:49.952 EAL: No free 2048 kB hugepages reported on node 1 00:03:49.952 EAL: Probing VFIO support... 00:03:49.952 EAL: IOMMU type 1 (Type 1) is supported 00:03:49.952 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:49.952 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:49.952 EAL: VFIO support initialized 00:03:49.952 EAL: Ask a virtual area of 0x2e000 bytes 00:03:49.952 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:49.952 EAL: Setting up physically contiguous memory... 00:03:49.952 EAL: Setting maximum number of open files to 524288 00:03:49.952 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:49.952 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:49.952 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:49.952 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:49.952 EAL: Ask a virtual 
area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:49.952 EAL: Ask a virtual area of 0x61000 bytes 00:03:49.952 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:49.952 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:49.952 EAL: Ask a virtual area of 0x400000000 bytes 00:03:49.952 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:49.952 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:49.952 EAL: Hugepages will be freed exactly as allocated. 00:03:49.952 EAL: No shared files mode enabled, IPC is disabled 00:03:49.952 EAL: No shared files mode enabled, IPC is disabled 00:03:49.952 EAL: TSC frequency is ~2700000 KHz 00:03:49.952 EAL: Main lcore 0 is ready (tid=7fbcce7e4a00;cpuset=[0]) 00:03:49.952 EAL: Trying to obtain current memory policy. 00:03:49.952 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.952 EAL: Restoring previous memory policy: 0 00:03:49.952 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 2MB 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:49.953 EAL: Mem event callback 'spdk:(nil)' registered 00:03:49.953 00:03:49.953 00:03:49.953 CUnit - A unit testing framework for C - Version 2.1-3 00:03:49.953 http://cunit.sourceforge.net/ 00:03:49.953 00:03:49.953 00:03:49.953 Suite: components_suite 00:03:49.953 Test: vtophys_malloc_test ...passed 00:03:49.953 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 4MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 4MB 00:03:49.953 EAL: Trying to obtain current memory policy. 
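The memseg-list setup above carves 2 MB hugepage-backed virtual areas per socket out of the pools shown in the earlier Hugepages table (2048 pages on node0, none on node1). A hedged sketch for inspecting those per-node pools directly through standard sysfs paths:

    # Report free/total 2 MB hugepages per NUMA node; these are the
    # numbers behind the "node0 2048kB 2048 / 2048" status lines above.
    for node in /sys/devices/system/node/node[0-9]*; do
        hp=$node/hugepages/hugepages-2048kB
        [[ -d $hp ]] || continue
        printf '%s: %s free / %s total\n' "${node##*/}" \
            "$(cat "$hp/free_hugepages")" "$(cat "$hp/nr_hugepages")"
    done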
00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 6MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 6MB 00:03:49.953 EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 10MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 10MB 00:03:49.953 EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 18MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 18MB 00:03:49.953 EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 34MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 34MB 00:03:49.953 EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 66MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 66MB 00:03:49.953 EAL: Trying to obtain current memory policy. 
00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:49.953 EAL: Restoring previous memory policy: 4 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was expanded by 130MB 00:03:49.953 EAL: Calling mem event callback 'spdk:(nil)' 00:03:49.953 EAL: request: mp_malloc_sync 00:03:49.953 EAL: No shared files mode enabled, IPC is disabled 00:03:49.953 EAL: Heap on socket 0 was shrunk by 130MB 00:03:49.953 EAL: Trying to obtain current memory policy. 00:03:49.953 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.212 EAL: Restoring previous memory policy: 4 00:03:50.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.212 EAL: request: mp_malloc_sync 00:03:50.212 EAL: No shared files mode enabled, IPC is disabled 00:03:50.212 EAL: Heap on socket 0 was expanded by 258MB 00:03:50.212 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.212 EAL: request: mp_malloc_sync 00:03:50.212 EAL: No shared files mode enabled, IPC is disabled 00:03:50.212 EAL: Heap on socket 0 was shrunk by 258MB 00:03:50.212 EAL: Trying to obtain current memory policy. 00:03:50.212 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.476 EAL: Restoring previous memory policy: 4 00:03:50.476 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.476 EAL: request: mp_malloc_sync 00:03:50.476 EAL: No shared files mode enabled, IPC is disabled 00:03:50.476 EAL: Heap on socket 0 was expanded by 514MB 00:03:50.476 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.476 EAL: request: mp_malloc_sync 00:03:50.476 EAL: No shared files mode enabled, IPC is disabled 00:03:50.476 EAL: Heap on socket 0 was shrunk by 514MB 00:03:50.476 EAL: Trying to obtain current memory policy. 
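For the record, the heap sizes the malloc test steps through (4, 6, 10, 18, 34, 66 MB so far, then 130, 258, 514 and 1026 MB below) follow 2^k + 2 MB for k = 1..10, so each round roughly doubles the allocation while landing just past a power-of-two boundary.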
00:03:50.476 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:50.734 EAL: Restoring previous memory policy: 4 00:03:50.734 EAL: Calling mem event callback 'spdk:(nil)' 00:03:50.734 EAL: request: mp_malloc_sync 00:03:50.734 EAL: No shared files mode enabled, IPC is disabled 00:03:50.734 EAL: Heap on socket 0 was expanded by 1026MB 00:03:50.991 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.251 EAL: request: mp_malloc_sync 00:03:51.251 EAL: No shared files mode enabled, IPC is disabled 00:03:51.251 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:51.251 passed 00:03:51.251 00:03:51.251 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.251 suites 1 1 n/a 0 0 00:03:51.251 tests 2 2 2 0 0 00:03:51.251 asserts 497 497 497 0 n/a 00:03:51.251 00:03:51.251 Elapsed time = 1.368 seconds 00:03:51.251 EAL: Calling mem event callback 'spdk:(nil)' 00:03:51.251 EAL: request: mp_malloc_sync 00:03:51.251 EAL: No shared files mode enabled, IPC is disabled 00:03:51.251 EAL: Heap on socket 0 was shrunk by 2MB 00:03:51.251 EAL: No shared files mode enabled, IPC is disabled 00:03:51.251 EAL: No shared files mode enabled, IPC is disabled 00:03:51.251 EAL: No shared files mode enabled, IPC is disabled 00:03:51.251 00:03:51.251 real 0m1.488s 00:03:51.251 user 0m0.861s 00:03:51.251 sys 0m0.591s 00:03:51.251 23:41:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.251 23:41:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:51.251 ************************************ 00:03:51.251 END TEST env_vtophys 00:03:51.251 ************************************ 00:03:51.251 23:41:21 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.251 23:41:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:51.251 23:41:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.251 23:41:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.251 ************************************ 00:03:51.251 START TEST env_pci 00:03:51.251 ************************************ 00:03:51.251 23:41:21 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:51.251 00:03:51.251 00:03:51.251 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.251 http://cunit.sourceforge.net/ 00:03:51.251 00:03:51.251 00:03:51.251 Suite: pci 00:03:51.251 Test: pci_hook ...[2024-07-24 23:41:21.833825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3248965 has claimed it 00:03:51.251 EAL: Cannot find device (10000:00:01.0) 00:03:51.251 EAL: Failed to attach device on primary process 00:03:51.251 passed 00:03:51.251 00:03:51.251 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.251 suites 1 1 n/a 0 0 00:03:51.251 tests 1 1 1 0 0 00:03:51.251 asserts 25 25 25 0 n/a 00:03:51.251 00:03:51.251 Elapsed time = 0.019 seconds 00:03:51.251 00:03:51.251 real 0m0.030s 00:03:51.251 user 0m0.009s 00:03:51.251 sys 0m0.021s 00:03:51.251 23:41:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:51.251 23:41:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:51.251 ************************************ 00:03:51.251 END TEST env_pci 00:03:51.251 ************************************ 00:03:51.552 23:41:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:51.552 
23:41:21 env -- env/env.sh@15 -- # uname 00:03:51.552 23:41:21 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:03:51.552 23:41:21 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:51.552 23:41:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.552 23:41:21 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:03:51.552 23:41:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.552 23:41:21 env -- common/autotest_common.sh@10 -- # set +x 00:03:51.552 ************************************ 00:03:51.552 START TEST env_dpdk_post_init 00:03:51.552 ************************************ 00:03:51.552 23:41:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:51.552 EAL: Detected CPU lcores: 48 00:03:51.552 EAL: Detected NUMA nodes: 2 00:03:51.552 EAL: Detected shared linkage of DPDK 00:03:51.552 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.552 EAL: Selected IOVA mode 'VA' 00:03:51.552 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.552 EAL: VFIO support initialized 00:03:51.552 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.552 EAL: Using IOMMU type 1 (Type 1) 00:03:51.552 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:51.552 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:51.553 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:51.811 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:51.811 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:51.811 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:51.811 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:51.811 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:52.378 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:55.662 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:55.662 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:55.925 Starting DPDK initialization... 00:03:55.925 Starting SPDK post initialization... 00:03:55.925 SPDK NVMe probe 00:03:55.925 Attaching to 0000:88:00.0 00:03:55.925 Attached to 0000:88:00.0 00:03:55.925 Cleaning up... 
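Runs like this flip devices between their kernel drivers and vfio-pci repeatedly (the "ioatdma -> vfio-pci" and "nvme -> vfio-pci" lines throughout). The generic sysfs mechanism behind such a rebind, sketched here with an example BDF from this log; the real scripts/setup.sh layers allow/block lists, hugepage setup and NUMA handling on top of this:

    # Rebind one PCI function to vfio-pci via driver_override (sketch).
    bdf=0000:00:04.0            # example BDF taken from the log above
    dev=/sys/bus/pci/devices/$bdf
    # Detach from the current driver, if any, then steer the next probe.
    [[ -e $dev/driver ]] && echo "$bdf" > "$dev/driver/unbind"
    echo vfio-pci > "$dev/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe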
00:03:55.925 00:03:55.925 real 0m4.408s 00:03:55.925 user 0m3.268s 00:03:55.925 sys 0m0.191s 00:03:55.925 23:41:26 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.925 23:41:26 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:55.925 ************************************ 00:03:55.925 END TEST env_dpdk_post_init 00:03:55.925 ************************************ 00:03:55.925 23:41:26 env -- env/env.sh@26 -- # uname 00:03:55.925 23:41:26 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:55.925 23:41:26 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.925 23:41:26 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.925 23:41:26 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.925 23:41:26 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.925 ************************************ 00:03:55.925 START TEST env_mem_callbacks 00:03:55.925 ************************************ 00:03:55.925 23:41:26 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:55.925 EAL: Detected CPU lcores: 48 00:03:55.925 EAL: Detected NUMA nodes: 2 00:03:55.925 EAL: Detected shared linkage of DPDK 00:03:55.925 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:55.925 EAL: Selected IOVA mode 'VA' 00:03:55.925 EAL: No free 2048 kB hugepages reported on node 1 00:03:55.925 EAL: VFIO support initialized 00:03:55.925 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:55.925 00:03:55.925 00:03:55.925 CUnit - A unit testing framework for C - Version 2.1-3 00:03:55.925 http://cunit.sourceforge.net/ 00:03:55.925 00:03:55.925 00:03:55.925 Suite: memory 00:03:55.925 Test: test ... 
00:03:55.925 register 0x200000200000 2097152 00:03:55.925 malloc 3145728 00:03:55.925 register 0x200000400000 4194304 00:03:55.925 buf 0x200000500000 len 3145728 PASSED 00:03:55.925 malloc 64 00:03:55.925 buf 0x2000004fff40 len 64 PASSED 00:03:55.925 malloc 4194304 00:03:55.925 register 0x200000800000 6291456 00:03:55.925 buf 0x200000a00000 len 4194304 PASSED 00:03:55.925 free 0x200000500000 3145728 00:03:55.925 free 0x2000004fff40 64 00:03:55.925 unregister 0x200000400000 4194304 PASSED 00:03:55.925 free 0x200000a00000 4194304 00:03:55.925 unregister 0x200000800000 6291456 PASSED 00:03:55.925 malloc 8388608 00:03:55.925 register 0x200000400000 10485760 00:03:55.925 buf 0x200000600000 len 8388608 PASSED 00:03:55.925 free 0x200000600000 8388608 00:03:55.925 unregister 0x200000400000 10485760 PASSED 00:03:55.925 passed 00:03:55.925 00:03:55.925 Run Summary: Type Total Ran Passed Failed Inactive 00:03:55.925 suites 1 1 n/a 0 0 00:03:55.925 tests 1 1 1 0 0 00:03:55.925 asserts 15 15 15 0 n/a 00:03:55.925 00:03:55.925 Elapsed time = 0.005 seconds 00:03:55.925 00:03:55.925 real 0m0.049s 00:03:55.925 user 0m0.013s 00:03:55.925 sys 0m0.035s 00:03:55.925 23:41:26 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.925 23:41:26 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:03:55.925 ************************************ 00:03:55.925 END TEST env_mem_callbacks 00:03:55.925 ************************************ 00:03:55.925 00:03:55.925 real 0m6.417s 00:03:55.925 user 0m4.423s 00:03:55.925 sys 0m1.028s 00:03:55.925 23:41:26 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.925 23:41:26 env -- common/autotest_common.sh@10 -- # set +x 00:03:55.925 ************************************ 00:03:55.925 END TEST env 00:03:55.925 ************************************ 00:03:55.925 23:41:26 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.925 23:41:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.925 23:41:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.925 23:41:26 -- common/autotest_common.sh@10 -- # set +x 00:03:55.925 ************************************ 00:03:55.925 START TEST rpc 00:03:55.925 ************************************ 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:55.925 * Looking for test storage... 00:03:55.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:55.925 23:41:26 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3249619 00:03:55.925 23:41:26 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:55.925 23:41:26 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.925 23:41:26 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3249619 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@829 -- # '[' -z 3249619 ']' 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
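The "Waiting for process to start up and listen on UNIX domain socket..." line is printed by the waitforlisten helper while the freshly launched spdk_tgt initializes. A minimal poll in the same spirit (a sketch, not the exact helper from autotest_common.sh; spdk_tgt_pid is assumed to hold the target's PID):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    for _ in $(seq 1 100); do
        # Bail out early if the target process already died.
        kill -0 "$spdk_tgt_pid" 2>/dev/null || { echo 'spdk_tgt exited early' >&2; exit 1; }
        # A 1 s RPC timeout keeps each probe short until the socket is up.
        "$rpc_py" -t 1 -s "$sock" rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.1
    done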
00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.925 23:41:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:56.183 [2024-07-24 23:41:26.577920] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:03:56.183 [2024-07-24 23:41:26.578006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249619 ] 00:03:56.183 EAL: No free 2048 kB hugepages reported on node 1 00:03:56.183 [2024-07-24 23:41:26.639499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.183 [2024-07-24 23:41:26.754618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:56.183 [2024-07-24 23:41:26.754681] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3249619' to capture a snapshot of events at runtime. 00:03:56.183 [2024-07-24 23:41:26.754697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:03:56.183 [2024-07-24 23:41:26.754710] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:03:56.183 [2024-07-24 23:41:26.754722] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3249619 for offline analysis/debug. 00:03:56.183 [2024-07-24 23:41:26.754753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.117 23:41:27 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:57.117 23:41:27 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:57.117 23:41:27 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.117 23:41:27 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:57.117 23:41:27 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:57.117 23:41:27 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:57.117 23:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.117 23:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.117 23:41:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 ************************************ 00:03:57.117 START TEST rpc_integrity 00:03:57.117 ************************************ 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.117 23:41:27 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.117 { 00:03:57.117 "name": "Malloc0", 00:03:57.117 "aliases": [ 00:03:57.117 "6a6208a3-0ec9-4dea-b284-9b1fb9fca87e" 00:03:57.117 ], 00:03:57.117 "product_name": "Malloc disk", 00:03:57.117 "block_size": 512, 00:03:57.117 "num_blocks": 16384, 00:03:57.117 "uuid": "6a6208a3-0ec9-4dea-b284-9b1fb9fca87e", 00:03:57.117 "assigned_rate_limits": { 00:03:57.117 "rw_ios_per_sec": 0, 00:03:57.117 "rw_mbytes_per_sec": 0, 00:03:57.117 "r_mbytes_per_sec": 0, 00:03:57.117 "w_mbytes_per_sec": 0 00:03:57.117 }, 00:03:57.117 "claimed": false, 00:03:57.117 "zoned": false, 00:03:57.117 "supported_io_types": { 00:03:57.117 "read": true, 00:03:57.117 "write": true, 00:03:57.117 "unmap": true, 00:03:57.117 "flush": true, 00:03:57.117 "reset": true, 00:03:57.117 "nvme_admin": false, 00:03:57.117 "nvme_io": false, 00:03:57.117 "nvme_io_md": false, 00:03:57.117 "write_zeroes": true, 00:03:57.117 "zcopy": true, 00:03:57.117 "get_zone_info": false, 00:03:57.117 "zone_management": false, 00:03:57.117 "zone_append": false, 00:03:57.117 "compare": false, 00:03:57.117 "compare_and_write": false, 00:03:57.117 "abort": true, 00:03:57.117 "seek_hole": false, 00:03:57.117 "seek_data": false, 00:03:57.117 "copy": true, 00:03:57.117 "nvme_iov_md": false 00:03:57.117 }, 00:03:57.117 "memory_domains": [ 00:03:57.117 { 00:03:57.117 "dma_device_id": "system", 00:03:57.117 "dma_device_type": 1 00:03:57.117 }, 00:03:57.117 { 00:03:57.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.117 "dma_device_type": 2 00:03:57.117 } 00:03:57.117 ], 00:03:57.117 "driver_specific": {} 00:03:57.117 } 00:03:57.117 ]' 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 [2024-07-24 23:41:27.649967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:57.117 [2024-07-24 23:41:27.650018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.117 [2024-07-24 23:41:27.650044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6d0d50 00:03:57.117 [2024-07-24 23:41:27.650060] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.117 [2024-07-24 23:41:27.651563] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
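
The rpc_integrity trace above drives the malloc/passthru bdev RPCs through rpc_cmd. A minimal standalone sketch of the same flow against an already-running spdk_tgt, assuming the usual scripts/rpc.py path and the default /var/tmp/spdk.sock socket (method names and arguments are taken from the trace):

    rpc=./scripts/rpc.py
    $rpc bdev_get_bdevs | jq length                  # expect 0 before setup
    malloc=$($rpc bdev_malloc_create 8 512)          # 8 MB, 512-byte blocks; prints the bdev name (Malloc0)
    $rpc bdev_passthru_create -b "$malloc" -p Passthru0
    $rpc bdev_get_bdevs | jq length                  # expect 2: Malloc0 plus Passthru0
    $rpc bdev_passthru_delete Passthru0
    $rpc bdev_malloc_delete "$malloc"
    $rpc bdev_get_bdevs | jq length                  # back to 0

The jq length checks mirror the '[' 0 == 0 ']' and '[' 2 == 2 ']' assertions visible in the xtrace.
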
00:03:57.117 [2024-07-24 23:41:27.651590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.117 Passthru0 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.117 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.117 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.117 { 00:03:57.117 "name": "Malloc0", 00:03:57.117 "aliases": [ 00:03:57.117 "6a6208a3-0ec9-4dea-b284-9b1fb9fca87e" 00:03:57.117 ], 00:03:57.117 "product_name": "Malloc disk", 00:03:57.117 "block_size": 512, 00:03:57.117 "num_blocks": 16384, 00:03:57.117 "uuid": "6a6208a3-0ec9-4dea-b284-9b1fb9fca87e", 00:03:57.117 "assigned_rate_limits": { 00:03:57.117 "rw_ios_per_sec": 0, 00:03:57.117 "rw_mbytes_per_sec": 0, 00:03:57.117 "r_mbytes_per_sec": 0, 00:03:57.117 "w_mbytes_per_sec": 0 00:03:57.117 }, 00:03:57.117 "claimed": true, 00:03:57.117 "claim_type": "exclusive_write", 00:03:57.117 "zoned": false, 00:03:57.117 "supported_io_types": { 00:03:57.117 "read": true, 00:03:57.117 "write": true, 00:03:57.117 "unmap": true, 00:03:57.117 "flush": true, 00:03:57.117 "reset": true, 00:03:57.117 "nvme_admin": false, 00:03:57.117 "nvme_io": false, 00:03:57.117 "nvme_io_md": false, 00:03:57.117 "write_zeroes": true, 00:03:57.117 "zcopy": true, 00:03:57.117 "get_zone_info": false, 00:03:57.117 "zone_management": false, 00:03:57.117 "zone_append": false, 00:03:57.117 "compare": false, 00:03:57.117 "compare_and_write": false, 00:03:57.117 "abort": true, 00:03:57.117 "seek_hole": false, 00:03:57.117 "seek_data": false, 00:03:57.117 "copy": true, 00:03:57.117 "nvme_iov_md": false 00:03:57.117 }, 00:03:57.117 "memory_domains": [ 00:03:57.117 { 00:03:57.117 "dma_device_id": "system", 00:03:57.117 "dma_device_type": 1 00:03:57.117 }, 00:03:57.117 { 00:03:57.117 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.117 "dma_device_type": 2 00:03:57.117 } 00:03:57.117 ], 00:03:57.117 "driver_specific": {} 00:03:57.117 }, 00:03:57.117 { 00:03:57.117 "name": "Passthru0", 00:03:57.117 "aliases": [ 00:03:57.117 "32ccf92f-703f-549c-88dc-263666311177" 00:03:57.117 ], 00:03:57.117 "product_name": "passthru", 00:03:57.117 "block_size": 512, 00:03:57.118 "num_blocks": 16384, 00:03:57.118 "uuid": "32ccf92f-703f-549c-88dc-263666311177", 00:03:57.118 "assigned_rate_limits": { 00:03:57.118 "rw_ios_per_sec": 0, 00:03:57.118 "rw_mbytes_per_sec": 0, 00:03:57.118 "r_mbytes_per_sec": 0, 00:03:57.118 "w_mbytes_per_sec": 0 00:03:57.118 }, 00:03:57.118 "claimed": false, 00:03:57.118 "zoned": false, 00:03:57.118 "supported_io_types": { 00:03:57.118 "read": true, 00:03:57.118 "write": true, 00:03:57.118 "unmap": true, 00:03:57.118 "flush": true, 00:03:57.118 "reset": true, 00:03:57.118 "nvme_admin": false, 00:03:57.118 "nvme_io": false, 00:03:57.118 "nvme_io_md": false, 00:03:57.118 "write_zeroes": true, 00:03:57.118 "zcopy": true, 00:03:57.118 "get_zone_info": false, 00:03:57.118 "zone_management": false, 00:03:57.118 "zone_append": false, 00:03:57.118 "compare": false, 00:03:57.118 "compare_and_write": false, 00:03:57.118 "abort": true, 00:03:57.118 "seek_hole": false, 00:03:57.118 "seek_data": false, 00:03:57.118 "copy": true, 00:03:57.118 "nvme_iov_md": false 00:03:57.118 
}, 00:03:57.118 "memory_domains": [ 00:03:57.118 { 00:03:57.118 "dma_device_id": "system", 00:03:57.118 "dma_device_type": 1 00:03:57.118 }, 00:03:57.118 { 00:03:57.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.118 "dma_device_type": 2 00:03:57.118 } 00:03:57.118 ], 00:03:57.118 "driver_specific": { 00:03:57.118 "passthru": { 00:03:57.118 "name": "Passthru0", 00:03:57.118 "base_bdev_name": "Malloc0" 00:03:57.118 } 00:03:57.118 } 00:03:57.118 } 00:03:57.118 ]' 00:03:57.118 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.118 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.118 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.118 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.118 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.118 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.376 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.376 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.376 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.376 23:41:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.376 00:03:57.376 real 0m0.238s 00:03:57.376 user 0m0.156s 00:03:57.376 sys 0m0.026s 00:03:57.376 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.376 23:41:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.376 ************************************ 00:03:57.376 END TEST rpc_integrity 00:03:57.376 ************************************ 00:03:57.376 23:41:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:57.376 23:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.376 23:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.376 23:41:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.376 ************************************ 00:03:57.376 START TEST rpc_plugins 00:03:57.376 ************************************ 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:57.376 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.376 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:57.376 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.376 23:41:27 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:03:57.376 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.376 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:57.376 { 00:03:57.376 "name": "Malloc1", 00:03:57.376 "aliases": [ 00:03:57.376 "41b0c62f-a206-42a0-8a44-9d7ab2aec252" 00:03:57.376 ], 00:03:57.376 "product_name": "Malloc disk", 00:03:57.376 "block_size": 4096, 00:03:57.376 "num_blocks": 256, 00:03:57.376 "uuid": "41b0c62f-a206-42a0-8a44-9d7ab2aec252", 00:03:57.376 "assigned_rate_limits": { 00:03:57.376 "rw_ios_per_sec": 0, 00:03:57.376 "rw_mbytes_per_sec": 0, 00:03:57.376 "r_mbytes_per_sec": 0, 00:03:57.376 "w_mbytes_per_sec": 0 00:03:57.376 }, 00:03:57.376 "claimed": false, 00:03:57.376 "zoned": false, 00:03:57.376 "supported_io_types": { 00:03:57.376 "read": true, 00:03:57.376 "write": true, 00:03:57.376 "unmap": true, 00:03:57.376 "flush": true, 00:03:57.376 "reset": true, 00:03:57.376 "nvme_admin": false, 00:03:57.376 "nvme_io": false, 00:03:57.376 "nvme_io_md": false, 00:03:57.376 "write_zeroes": true, 00:03:57.376 "zcopy": true, 00:03:57.376 "get_zone_info": false, 00:03:57.376 "zone_management": false, 00:03:57.376 "zone_append": false, 00:03:57.376 "compare": false, 00:03:57.376 "compare_and_write": false, 00:03:57.376 "abort": true, 00:03:57.376 "seek_hole": false, 00:03:57.376 "seek_data": false, 00:03:57.376 "copy": true, 00:03:57.376 "nvme_iov_md": false 00:03:57.376 }, 00:03:57.376 "memory_domains": [ 00:03:57.376 { 00:03:57.376 "dma_device_id": "system", 00:03:57.376 "dma_device_type": 1 00:03:57.376 }, 00:03:57.376 { 00:03:57.376 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.376 "dma_device_type": 2 00:03:57.376 } 00:03:57.376 ], 00:03:57.377 "driver_specific": {} 00:03:57.377 } 00:03:57.377 ]' 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:57.377 23:41:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:57.377 00:03:57.377 real 0m0.114s 00:03:57.377 user 0m0.072s 00:03:57.377 sys 0m0.014s 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.377 23:41:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:57.377 ************************************ 00:03:57.377 END TEST rpc_plugins 00:03:57.377 ************************************ 00:03:57.377 23:41:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:57.377 23:41:27 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.377 23:41:27 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.377 23:41:27 
rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.377 ************************************ 00:03:57.377 START TEST rpc_trace_cmd_test 00:03:57.377 ************************************ 00:03:57.377 23:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:57.377 23:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:57.377 23:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:57.377 23:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.377 23:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.634 23:41:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.634 23:41:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:57.634 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3249619", 00:03:57.634 "tpoint_group_mask": "0x8", 00:03:57.634 "iscsi_conn": { 00:03:57.634 "mask": "0x2", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.634 "scsi": { 00:03:57.634 "mask": "0x4", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.634 "bdev": { 00:03:57.634 "mask": "0x8", 00:03:57.634 "tpoint_mask": "0xffffffffffffffff" 00:03:57.634 }, 00:03:57.634 "nvmf_rdma": { 00:03:57.634 "mask": "0x10", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.634 "nvmf_tcp": { 00:03:57.634 "mask": "0x20", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.634 "ftl": { 00:03:57.634 "mask": "0x40", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.634 "blobfs": { 00:03:57.634 "mask": "0x80", 00:03:57.634 "tpoint_mask": "0x0" 00:03:57.634 }, 00:03:57.635 "dsa": { 00:03:57.635 "mask": "0x200", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "thread": { 00:03:57.635 "mask": "0x400", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "nvme_pcie": { 00:03:57.635 "mask": "0x800", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "iaa": { 00:03:57.635 "mask": "0x1000", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "nvme_tcp": { 00:03:57.635 "mask": "0x2000", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "bdev_nvme": { 00:03:57.635 "mask": "0x4000", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 }, 00:03:57.635 "sock": { 00:03:57.635 "mask": "0x8000", 00:03:57.635 "tpoint_mask": "0x0" 00:03:57.635 } 00:03:57.635 }' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:57.635 00:03:57.635 real 0m0.205s 00:03:57.635 user 0m0.186s 00:03:57.635 sys 0m0.009s 00:03:57.635 23:41:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.635 23:41:28 
rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:57.635 ************************************ 00:03:57.635 END TEST rpc_trace_cmd_test 00:03:57.635 ************************************ 00:03:57.635 23:41:28 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:57.635 23:41:28 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:57.635 23:41:28 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:57.635 23:41:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:57.635 23:41:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:57.635 23:41:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:57.635 ************************************ 00:03:57.635 START TEST rpc_daemon_integrity 00:03:57.635 ************************************ 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:57.635 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.893 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:57.893 { 00:03:57.893 "name": "Malloc2", 00:03:57.893 "aliases": [ 00:03:57.893 "a05d7d12-ac60-4505-ba80-905d90ef4168" 00:03:57.893 ], 00:03:57.893 "product_name": "Malloc disk", 00:03:57.893 "block_size": 512, 00:03:57.893 "num_blocks": 16384, 00:03:57.893 "uuid": "a05d7d12-ac60-4505-ba80-905d90ef4168", 00:03:57.893 "assigned_rate_limits": { 00:03:57.893 "rw_ios_per_sec": 0, 00:03:57.893 "rw_mbytes_per_sec": 0, 00:03:57.893 "r_mbytes_per_sec": 0, 00:03:57.893 "w_mbytes_per_sec": 0 00:03:57.893 }, 00:03:57.893 "claimed": false, 00:03:57.893 "zoned": false, 00:03:57.893 "supported_io_types": { 00:03:57.893 "read": true, 00:03:57.893 "write": true, 00:03:57.893 "unmap": true, 00:03:57.893 "flush": true, 00:03:57.893 "reset": true, 00:03:57.893 "nvme_admin": false, 00:03:57.893 "nvme_io": false, 00:03:57.893 "nvme_io_md": false, 00:03:57.893 "write_zeroes": true, 00:03:57.893 "zcopy": true, 00:03:57.893 "get_zone_info": false, 00:03:57.893 "zone_management": false, 00:03:57.893 "zone_append": false, 00:03:57.893 "compare": false, 00:03:57.893 "compare_and_write": false, 
00:03:57.893 "abort": true, 00:03:57.893 "seek_hole": false, 00:03:57.893 "seek_data": false, 00:03:57.893 "copy": true, 00:03:57.893 "nvme_iov_md": false 00:03:57.893 }, 00:03:57.893 "memory_domains": [ 00:03:57.893 { 00:03:57.893 "dma_device_id": "system", 00:03:57.893 "dma_device_type": 1 00:03:57.893 }, 00:03:57.893 { 00:03:57.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.894 "dma_device_type": 2 00:03:57.894 } 00:03:57.894 ], 00:03:57.894 "driver_specific": {} 00:03:57.894 } 00:03:57.894 ]' 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 [2024-07-24 23:41:28.343943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:57.894 [2024-07-24 23:41:28.343986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:57.894 [2024-07-24 23:41:28.344023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6d0980 00:03:57.894 [2024-07-24 23:41:28.344054] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:57.894 [2024-07-24 23:41:28.345356] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:57.894 [2024-07-24 23:41:28.345381] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:57.894 Passthru0 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:57.894 { 00:03:57.894 "name": "Malloc2", 00:03:57.894 "aliases": [ 00:03:57.894 "a05d7d12-ac60-4505-ba80-905d90ef4168" 00:03:57.894 ], 00:03:57.894 "product_name": "Malloc disk", 00:03:57.894 "block_size": 512, 00:03:57.894 "num_blocks": 16384, 00:03:57.894 "uuid": "a05d7d12-ac60-4505-ba80-905d90ef4168", 00:03:57.894 "assigned_rate_limits": { 00:03:57.894 "rw_ios_per_sec": 0, 00:03:57.894 "rw_mbytes_per_sec": 0, 00:03:57.894 "r_mbytes_per_sec": 0, 00:03:57.894 "w_mbytes_per_sec": 0 00:03:57.894 }, 00:03:57.894 "claimed": true, 00:03:57.894 "claim_type": "exclusive_write", 00:03:57.894 "zoned": false, 00:03:57.894 "supported_io_types": { 00:03:57.894 "read": true, 00:03:57.894 "write": true, 00:03:57.894 "unmap": true, 00:03:57.894 "flush": true, 00:03:57.894 "reset": true, 00:03:57.894 "nvme_admin": false, 00:03:57.894 "nvme_io": false, 00:03:57.894 "nvme_io_md": false, 00:03:57.894 "write_zeroes": true, 00:03:57.894 "zcopy": true, 00:03:57.894 "get_zone_info": false, 00:03:57.894 "zone_management": false, 00:03:57.894 "zone_append": false, 00:03:57.894 "compare": false, 00:03:57.894 "compare_and_write": false, 00:03:57.894 "abort": true, 00:03:57.894 "seek_hole": false, 00:03:57.894 "seek_data": false, 00:03:57.894 "copy": true, 
00:03:57.894 "nvme_iov_md": false 00:03:57.894 }, 00:03:57.894 "memory_domains": [ 00:03:57.894 { 00:03:57.894 "dma_device_id": "system", 00:03:57.894 "dma_device_type": 1 00:03:57.894 }, 00:03:57.894 { 00:03:57.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.894 "dma_device_type": 2 00:03:57.894 } 00:03:57.894 ], 00:03:57.894 "driver_specific": {} 00:03:57.894 }, 00:03:57.894 { 00:03:57.894 "name": "Passthru0", 00:03:57.894 "aliases": [ 00:03:57.894 "e8aa09c9-caf3-5050-9cb3-b202e8d8af91" 00:03:57.894 ], 00:03:57.894 "product_name": "passthru", 00:03:57.894 "block_size": 512, 00:03:57.894 "num_blocks": 16384, 00:03:57.894 "uuid": "e8aa09c9-caf3-5050-9cb3-b202e8d8af91", 00:03:57.894 "assigned_rate_limits": { 00:03:57.894 "rw_ios_per_sec": 0, 00:03:57.894 "rw_mbytes_per_sec": 0, 00:03:57.894 "r_mbytes_per_sec": 0, 00:03:57.894 "w_mbytes_per_sec": 0 00:03:57.894 }, 00:03:57.894 "claimed": false, 00:03:57.894 "zoned": false, 00:03:57.894 "supported_io_types": { 00:03:57.894 "read": true, 00:03:57.894 "write": true, 00:03:57.894 "unmap": true, 00:03:57.894 "flush": true, 00:03:57.894 "reset": true, 00:03:57.894 "nvme_admin": false, 00:03:57.894 "nvme_io": false, 00:03:57.894 "nvme_io_md": false, 00:03:57.894 "write_zeroes": true, 00:03:57.894 "zcopy": true, 00:03:57.894 "get_zone_info": false, 00:03:57.894 "zone_management": false, 00:03:57.894 "zone_append": false, 00:03:57.894 "compare": false, 00:03:57.894 "compare_and_write": false, 00:03:57.894 "abort": true, 00:03:57.894 "seek_hole": false, 00:03:57.894 "seek_data": false, 00:03:57.894 "copy": true, 00:03:57.894 "nvme_iov_md": false 00:03:57.894 }, 00:03:57.894 "memory_domains": [ 00:03:57.894 { 00:03:57.894 "dma_device_id": "system", 00:03:57.894 "dma_device_type": 1 00:03:57.894 }, 00:03:57.894 { 00:03:57.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:57.894 "dma_device_type": 2 00:03:57.894 } 00:03:57.894 ], 00:03:57.894 "driver_specific": { 00:03:57.894 "passthru": { 00:03:57.894 "name": "Passthru0", 00:03:57.894 "base_bdev_name": "Malloc2" 00:03:57.894 } 00:03:57.894 } 00:03:57.894 } 00:03:57.894 ]' 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:57.894 23:41:28 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:57.894 00:03:57.894 real 0m0.231s 00:03:57.894 user 0m0.150s 00:03:57.894 sys 0m0.024s 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:57.894 23:41:28 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:57.894 ************************************ 00:03:57.894 END TEST rpc_daemon_integrity 00:03:57.894 ************************************ 00:03:57.894 23:41:28 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:57.894 23:41:28 rpc -- rpc/rpc.sh@84 -- # killprocess 3249619 00:03:57.894 23:41:28 rpc -- common/autotest_common.sh@948 -- # '[' -z 3249619 ']' 00:03:57.894 23:41:28 rpc -- common/autotest_common.sh@952 -- # kill -0 3249619 00:03:57.894 23:41:28 rpc -- common/autotest_common.sh@953 -- # uname 00:03:57.894 23:41:28 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:03:57.894 23:41:28 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3249619 00:03:58.152 23:41:28 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:03:58.152 23:41:28 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:03:58.152 23:41:28 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3249619' 00:03:58.152 killing process with pid 3249619 00:03:58.152 23:41:28 rpc -- common/autotest_common.sh@967 -- # kill 3249619 00:03:58.152 23:41:28 rpc -- common/autotest_common.sh@972 -- # wait 3249619 00:03:58.410 00:03:58.411 real 0m2.501s 00:03:58.411 user 0m3.198s 00:03:58.411 sys 0m0.626s 00:03:58.411 23:41:28 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:58.411 23:41:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.411 ************************************ 00:03:58.411 END TEST rpc 00:03:58.411 ************************************ 00:03:58.411 23:41:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:58.411 23:41:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.411 23:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.411 23:41:29 -- common/autotest_common.sh@10 -- # set +x 00:03:58.668 ************************************ 00:03:58.668 START TEST skip_rpc 00:03:58.668 ************************************ 00:03:58.668 23:41:29 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:58.668 * Looking for test storage... 
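
Each suite above tears the target down through killprocess, whose xtrace is visible: a kill -0 liveness probe, a ps comm= lookup, then kill and wait. A rough reconstruction of that helper, with the sudo special-casing elided (the real autotest_common.sh version handles more edge cases):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 1                      # still alive?
        local name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
        [ "$name" = sudo ] && return 1                  # real helper treats sudo-wrapped procs specially
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }
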
00:03:58.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:58.668 23:41:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:58.668 23:41:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:58.668 23:41:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:58.668 23:41:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:58.668 23:41:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:58.668 23:41:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:58.668 ************************************ 00:03:58.668 START TEST skip_rpc 00:03:58.668 ************************************ 00:03:58.668 23:41:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:58.668 23:41:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3250067 00:03:58.668 23:41:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:58.668 23:41:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:58.668 23:41:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:58.668 [2024-07-24 23:41:29.152624] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:03:58.668 [2024-07-24 23:41:29.152688] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250067 ] 00:03:58.668 EAL: No free 2048 kB hugepages reported on node 1 00:03:58.668 [2024-07-24 23:41:29.208081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:58.926 [2024-07-24 23:41:29.320002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:04.184 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3250067 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 3250067 ']' 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 3250067 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3250067 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3250067' 00:04:04.185 killing process with pid 3250067 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 3250067 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 3250067 00:04:04.185 00:04:04.185 real 0m5.501s 00:04:04.185 user 0m5.196s 00:04:04.185 sys 0m0.312s 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:04.185 23:41:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.185 ************************************ 00:04:04.185 END TEST skip_rpc 00:04:04.185 ************************************ 00:04:04.185 23:41:34 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:04.185 23:41:34 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:04.185 23:41:34 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:04.185 23:41:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:04.185 ************************************ 00:04:04.185 START TEST skip_rpc_with_json 00:04:04.185 ************************************ 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3250755 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3250755 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 3250755 ']' 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:04.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
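
The skip_rpc case that finishes above asserts that a target launched with --no-rpc-server refuses RPC traffic: rpc_cmd spdk_get_version must fail, and the NOT wrapper inverts that failure into a pass. A condensed reproduction, assuming the usual binary and rpc.py paths (the flags and the fixed sleep come from the trace):

    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                    # no RPC socket to poll, so the test just waits
    if ./scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC answered despite --no-rpc-server"
        exit 1
    fi
    kill "$pid" && wait "$pid"
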
00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:04.185 23:41:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:04.185 [2024-07-24 23:41:34.706877] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:04.185 [2024-07-24 23:41:34.706956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250755 ] 00:04:04.185 EAL: No free 2048 kB hugepages reported on node 1 00:04:04.185 [2024-07-24 23:41:34.766887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:04.443 [2024-07-24 23:41:34.881112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.376 [2024-07-24 23:41:35.632574] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:05.376 request: 00:04:05.376 { 00:04:05.376 "trtype": "tcp", 00:04:05.376 "method": "nvmf_get_transports", 00:04:05.376 "req_id": 1 00:04:05.376 } 00:04:05.376 Got JSON-RPC error response 00:04:05.376 response: 00:04:05.376 { 00:04:05.376 "code": -19, 00:04:05.376 "message": "No such device" 00:04:05.376 } 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.376 [2024-07-24 23:41:35.640703] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:05.376 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.376 { 00:04:05.376 "subsystems": [ 00:04:05.376 { 00:04:05.376 "subsystem": "vfio_user_target", 00:04:05.376 "config": null 00:04:05.376 }, 00:04:05.376 { 00:04:05.376 "subsystem": "keyring", 00:04:05.376 "config": [] 00:04:05.376 }, 00:04:05.376 { 00:04:05.376 "subsystem": "iobuf", 00:04:05.376 "config": [ 00:04:05.376 { 00:04:05.376 "method": "iobuf_set_options", 00:04:05.376 "params": { 00:04:05.376 "small_pool_count": 8192, 00:04:05.376 "large_pool_count": 1024, 00:04:05.376 "small_bufsize": 8192, 00:04:05.376 "large_bufsize": 
135168 00:04:05.376 } 00:04:05.376 } 00:04:05.376 ] 00:04:05.376 }, 00:04:05.376 { 00:04:05.376 "subsystem": "sock", 00:04:05.376 "config": [ 00:04:05.376 { 00:04:05.376 "method": "sock_set_default_impl", 00:04:05.376 "params": { 00:04:05.376 "impl_name": "posix" 00:04:05.376 } 00:04:05.376 }, 00:04:05.376 { 00:04:05.376 "method": "sock_impl_set_options", 00:04:05.376 "params": { 00:04:05.376 "impl_name": "ssl", 00:04:05.376 "recv_buf_size": 4096, 00:04:05.376 "send_buf_size": 4096, 00:04:05.376 "enable_recv_pipe": true, 00:04:05.377 "enable_quickack": false, 00:04:05.377 "enable_placement_id": 0, 00:04:05.377 "enable_zerocopy_send_server": true, 00:04:05.377 "enable_zerocopy_send_client": false, 00:04:05.377 "zerocopy_threshold": 0, 00:04:05.377 "tls_version": 0, 00:04:05.377 "enable_ktls": false 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "sock_impl_set_options", 00:04:05.377 "params": { 00:04:05.377 "impl_name": "posix", 00:04:05.377 "recv_buf_size": 2097152, 00:04:05.377 "send_buf_size": 2097152, 00:04:05.377 "enable_recv_pipe": true, 00:04:05.377 "enable_quickack": false, 00:04:05.377 "enable_placement_id": 0, 00:04:05.377 "enable_zerocopy_send_server": true, 00:04:05.377 "enable_zerocopy_send_client": false, 00:04:05.377 "zerocopy_threshold": 0, 00:04:05.377 "tls_version": 0, 00:04:05.377 "enable_ktls": false 00:04:05.377 } 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "vmd", 00:04:05.377 "config": [] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "accel", 00:04:05.377 "config": [ 00:04:05.377 { 00:04:05.377 "method": "accel_set_options", 00:04:05.377 "params": { 00:04:05.377 "small_cache_size": 128, 00:04:05.377 "large_cache_size": 16, 00:04:05.377 "task_count": 2048, 00:04:05.377 "sequence_count": 2048, 00:04:05.377 "buf_count": 2048 00:04:05.377 } 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "bdev", 00:04:05.377 "config": [ 00:04:05.377 { 00:04:05.377 "method": "bdev_set_options", 00:04:05.377 "params": { 00:04:05.377 "bdev_io_pool_size": 65535, 00:04:05.377 "bdev_io_cache_size": 256, 00:04:05.377 "bdev_auto_examine": true, 00:04:05.377 "iobuf_small_cache_size": 128, 00:04:05.377 "iobuf_large_cache_size": 16 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "bdev_raid_set_options", 00:04:05.377 "params": { 00:04:05.377 "process_window_size_kb": 1024, 00:04:05.377 "process_max_bandwidth_mb_sec": 0 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "bdev_iscsi_set_options", 00:04:05.377 "params": { 00:04:05.377 "timeout_sec": 30 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "bdev_nvme_set_options", 00:04:05.377 "params": { 00:04:05.377 "action_on_timeout": "none", 00:04:05.377 "timeout_us": 0, 00:04:05.377 "timeout_admin_us": 0, 00:04:05.377 "keep_alive_timeout_ms": 10000, 00:04:05.377 "arbitration_burst": 0, 00:04:05.377 "low_priority_weight": 0, 00:04:05.377 "medium_priority_weight": 0, 00:04:05.377 "high_priority_weight": 0, 00:04:05.377 "nvme_adminq_poll_period_us": 10000, 00:04:05.377 "nvme_ioq_poll_period_us": 0, 00:04:05.377 "io_queue_requests": 0, 00:04:05.377 "delay_cmd_submit": true, 00:04:05.377 "transport_retry_count": 4, 00:04:05.377 "bdev_retry_count": 3, 00:04:05.377 "transport_ack_timeout": 0, 00:04:05.377 "ctrlr_loss_timeout_sec": 0, 00:04:05.377 "reconnect_delay_sec": 0, 00:04:05.377 "fast_io_fail_timeout_sec": 0, 00:04:05.377 "disable_auto_failback": false, 00:04:05.377 "generate_uuids": 
false, 00:04:05.377 "transport_tos": 0, 00:04:05.377 "nvme_error_stat": false, 00:04:05.377 "rdma_srq_size": 0, 00:04:05.377 "io_path_stat": false, 00:04:05.377 "allow_accel_sequence": false, 00:04:05.377 "rdma_max_cq_size": 0, 00:04:05.377 "rdma_cm_event_timeout_ms": 0, 00:04:05.377 "dhchap_digests": [ 00:04:05.377 "sha256", 00:04:05.377 "sha384", 00:04:05.377 "sha512" 00:04:05.377 ], 00:04:05.377 "dhchap_dhgroups": [ 00:04:05.377 "null", 00:04:05.377 "ffdhe2048", 00:04:05.377 "ffdhe3072", 00:04:05.377 "ffdhe4096", 00:04:05.377 "ffdhe6144", 00:04:05.377 "ffdhe8192" 00:04:05.377 ] 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "bdev_nvme_set_hotplug", 00:04:05.377 "params": { 00:04:05.377 "period_us": 100000, 00:04:05.377 "enable": false 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "bdev_wait_for_examine" 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "scsi", 00:04:05.377 "config": null 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "scheduler", 00:04:05.377 "config": [ 00:04:05.377 { 00:04:05.377 "method": "framework_set_scheduler", 00:04:05.377 "params": { 00:04:05.377 "name": "static" 00:04:05.377 } 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "vhost_scsi", 00:04:05.377 "config": [] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "vhost_blk", 00:04:05.377 "config": [] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "ublk", 00:04:05.377 "config": [] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "nbd", 00:04:05.377 "config": [] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "nvmf", 00:04:05.377 "config": [ 00:04:05.377 { 00:04:05.377 "method": "nvmf_set_config", 00:04:05.377 "params": { 00:04:05.377 "discovery_filter": "match_any", 00:04:05.377 "admin_cmd_passthru": { 00:04:05.377 "identify_ctrlr": false 00:04:05.377 } 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "nvmf_set_max_subsystems", 00:04:05.377 "params": { 00:04:05.377 "max_subsystems": 1024 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "nvmf_set_crdt", 00:04:05.377 "params": { 00:04:05.377 "crdt1": 0, 00:04:05.377 "crdt2": 0, 00:04:05.377 "crdt3": 0 00:04:05.377 } 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "method": "nvmf_create_transport", 00:04:05.377 "params": { 00:04:05.377 "trtype": "TCP", 00:04:05.377 "max_queue_depth": 128, 00:04:05.377 "max_io_qpairs_per_ctrlr": 127, 00:04:05.377 "in_capsule_data_size": 4096, 00:04:05.377 "max_io_size": 131072, 00:04:05.377 "io_unit_size": 131072, 00:04:05.377 "max_aq_depth": 128, 00:04:05.377 "num_shared_buffers": 511, 00:04:05.377 "buf_cache_size": 4294967295, 00:04:05.377 "dif_insert_or_strip": false, 00:04:05.377 "zcopy": false, 00:04:05.377 "c2h_success": true, 00:04:05.377 "sock_priority": 0, 00:04:05.377 "abort_timeout_sec": 1, 00:04:05.377 "ack_timeout": 0, 00:04:05.377 "data_wr_pool_size": 0 00:04:05.377 } 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 }, 00:04:05.377 { 00:04:05.377 "subsystem": "iscsi", 00:04:05.377 "config": [ 00:04:05.377 { 00:04:05.377 "method": "iscsi_set_options", 00:04:05.377 "params": { 00:04:05.377 "node_base": "iqn.2016-06.io.spdk", 00:04:05.377 "max_sessions": 128, 00:04:05.377 "max_connections_per_session": 2, 00:04:05.377 "max_queue_depth": 64, 00:04:05.377 "default_time2wait": 2, 00:04:05.377 "default_time2retain": 20, 00:04:05.377 "first_burst_length": 8192, 00:04:05.377 "immediate_data": true, 00:04:05.377 "allow_duplicated_isid": 
false, 00:04:05.377 "error_recovery_level": 0, 00:04:05.377 "nop_timeout": 60, 00:04:05.377 "nop_in_interval": 30, 00:04:05.377 "disable_chap": false, 00:04:05.377 "require_chap": false, 00:04:05.377 "mutual_chap": false, 00:04:05.377 "chap_group": 0, 00:04:05.377 "max_large_datain_per_connection": 64, 00:04:05.377 "max_r2t_per_connection": 4, 00:04:05.377 "pdu_pool_size": 36864, 00:04:05.377 "immediate_data_pool_size": 16384, 00:04:05.377 "data_out_pool_size": 2048 00:04:05.377 } 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 } 00:04:05.377 ] 00:04:05.377 } 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3250755 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3250755 ']' 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3250755 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3250755 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3250755' 00:04:05.377 killing process with pid 3250755 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3250755 00:04:05.377 23:41:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 3250755 00:04:05.943 23:41:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3250984 00:04:05.943 23:41:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:05.943 23:41:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3250984 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 3250984 ']' 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 3250984 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3250984 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3250984' 00:04:11.204 killing process with pid 3250984 00:04:11.204 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 3250984 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 
3250984 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:11.205 00:04:11.205 real 0m7.110s 00:04:11.205 user 0m6.882s 00:04:11.205 sys 0m0.724s 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 ************************************ 00:04:11.205 END TEST skip_rpc_with_json 00:04:11.205 ************************************ 00:04:11.205 23:41:41 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:11.205 23:41:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.205 23:41:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.205 23:41:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.205 ************************************ 00:04:11.205 START TEST skip_rpc_with_delay 00:04:11.205 ************************************ 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:11.205 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:11.463 [2024-07-24 23:41:41.859620] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
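
skip_rpc_with_json, which ends above, round-trips live state through JSON: it creates the TCP transport over RPC, saves the configuration, boots a fresh --no-rpc-server target from that file, and greps its log for the transport init notice. A condensed sketch, where $CONFIG_PATH and $LOG_PATH are the variables set at the top of skip_rpc.sh and the output redirection is an assumption:

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py save_config > "$CONFIG_PATH"
    # stop the RPC-enabled target, then replay the config without RPC:
    ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$CONFIG_PATH" > "$LOG_PATH" 2>&1 &
    pid=$!
    sleep 5
    kill "$pid" && wait "$pid"
    grep -q 'TCP Transport Init' "$LOG_PATH"   # proves the transport was re-created from JSON
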
00:04:11.463 [2024-07-24 23:41:41.859732] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:11.463 00:04:11.463 real 0m0.068s 00:04:11.463 user 0m0.036s 00:04:11.463 sys 0m0.031s 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.463 23:41:41 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:11.463 ************************************ 00:04:11.463 END TEST skip_rpc_with_delay 00:04:11.463 ************************************ 00:04:11.463 23:41:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:11.463 23:41:41 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:11.463 23:41:41 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:11.463 23:41:41 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.463 23:41:41 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.463 23:41:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.463 ************************************ 00:04:11.463 START TEST exit_on_failed_rpc_init 00:04:11.463 ************************************ 00:04:11.463 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3251674 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3251674 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 3251674 ']' 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:11.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:11.464 23:41:41 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:11.464 [2024-07-24 23:41:41.971250] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:04:11.464 [2024-07-24 23:41:41.971348] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251674 ] 00:04:11.464 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.464 [2024-07-24 23:41:42.028813] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.722 [2024-07-24 23:41:42.140410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:11.980 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:11.980 [2024-07-24 23:41:42.451759] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
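exit_on_failed_rpc_init, starting here, provokes an RPC init failure on purpose: a second target is launched against the same default Unix socket the first one already claimed, and the test expects it to die with the "RPC Unix domain socket path /var/tmp/spdk.sock in use" error shown just below. A rough reproduction, reusing the NOT helper sketched above and assuming spdk_tgt is run from the SPDK build directory:

# First target claims the default RPC socket /var/tmp/spdk.sock.
./build/bin/spdk_tgt -m 0x1 &
spdk_pid=$!
sleep 1                          # crude stand-in for the waitforlisten helper
# Second target on another core mask must fail to initialize its RPC server.
NOT ./build/bin/spdk_tgt -m 0x2
kill "$spdk_pid"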
00:04:11.980 [2024-07-24 23:41:42.451849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251750 ] 00:04:11.980 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.980 [2024-07-24 23:41:42.512590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:12.239 [2024-07-24 23:41:42.634248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.239 [2024-07-24 23:41:42.634372] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:12.239 [2024-07-24 23:41:42.634391] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:12.239 [2024-07-24 23:41:42.634402] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3251674 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 3251674 ']' 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 3251674 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3251674 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3251674' 00:04:12.239 killing process with pid 3251674 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 3251674 00:04:12.239 23:41:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 3251674 00:04:12.804 00:04:12.804 real 0m1.328s 00:04:12.804 user 0m1.505s 00:04:12.804 sys 0m0.449s 00:04:12.804 23:41:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.804 23:41:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:12.804 ************************************ 00:04:12.804 END TEST exit_on_failed_rpc_init 00:04:12.804 ************************************ 00:04:12.804 23:41:43 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:12.804 00:04:12.804 real 0m14.246s 00:04:12.804 user 0m13.708s 00:04:12.804 sys 0m1.682s 00:04:12.804 23:41:43 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.804 23:41:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.804 ************************************ 00:04:12.804 END TEST skip_rpc 00:04:12.804 ************************************ 00:04:12.804 23:41:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.804 23:41:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.804 23:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.804 23:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:12.804 ************************************ 00:04:12.804 START TEST rpc_client 00:04:12.804 ************************************ 00:04:12.804 23:41:43 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:12.804 * Looking for test storage... 00:04:12.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:12.804 23:41:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:12.804 OK 00:04:12.804 23:41:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:12.804 00:04:12.805 real 0m0.068s 00:04:12.805 user 0m0.032s 00:04:12.805 sys 0m0.041s 00:04:12.805 23:41:43 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.805 23:41:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:12.805 ************************************ 00:04:12.805 END TEST rpc_client 00:04:12.805 ************************************ 00:04:12.805 23:41:43 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:12.805 23:41:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.805 23:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.805 23:41:43 -- common/autotest_common.sh@10 -- # set +x 00:04:13.063 ************************************ 00:04:13.063 START TEST json_config 00:04:13.063 ************************************ 00:04:13.063 23:41:43 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:04:13.063 23:41:43 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:13.063 23:41:43 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:13.063 23:41:43 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:13.063 23:41:43 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:13.063 23:41:43 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.063 23:41:43 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.063 23:41:43 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.063 23:41:43 json_config -- paths/export.sh@5 -- # export PATH 00:04:13.063 23:41:43 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@47 -- # : 0 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:13.063 23:41:43 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:13.063 INFO: JSON configuration test init 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:13.063 23:41:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.063 23:41:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:13.063 23:41:43 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:13.063 23:41:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.063 23:41:43 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:13.063 23:41:43 json_config -- json_config/common.sh@9 -- # local app=target 00:04:13.063 23:41:43 json_config -- json_config/common.sh@10 -- # shift 00:04:13.063 23:41:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:13.063 23:41:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:13.063 23:41:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:13.063 23:41:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 
]] 00:04:13.063 23:41:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:13.064 23:41:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3251990 00:04:13.064 23:41:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:13.064 23:41:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:13.064 Waiting for target to run... 00:04:13.064 23:41:43 json_config -- json_config/common.sh@25 -- # waitforlisten 3251990 /var/tmp/spdk_tgt.sock 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@829 -- # '[' -z 3251990 ']' 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:13.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:13.064 23:41:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.064 [2024-07-24 23:41:43.532998] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:13.064 [2024-07-24 23:41:43.533083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3251990 ] 00:04:13.064 EAL: No free 2048 kB hugepages reported on node 1 00:04:13.321 [2024-07-24 23:41:43.871587] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.579 [2024-07-24 23:41:43.961103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:14.145 23:41:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:14.145 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:14.145 23:41:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:14.145 23:41:44 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:14.145 23:41:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@280 -- # 
tgt_check_notification_types 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:17.455 23:41:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.455 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:17.455 23:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@51 -- # sort 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:17.455 23:41:47 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:17.455 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:17.455 23:41:47 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:17.456 23:41:47 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:17.456 23:41:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.456 23:41:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:17.456 23:41:47 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:17.456 23:41:47 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:17.456 23:41:47 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:17.456 23:41:47 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.456 23:41:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:17.713 MallocForNvmf0 00:04:17.713 
23:41:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.713 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:17.971 MallocForNvmf1 00:04:17.971 23:41:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:17.971 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:18.230 [2024-07-24 23:41:48.642649] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:18.230 23:41:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.230 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:18.488 23:41:48 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.488 23:41:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:18.746 23:41:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:18.746 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:19.003 23:41:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:19.003 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:19.003 [2024-07-24 23:41:49.614088] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:19.261 23:41:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:19.261 23:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.261 23:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.261 23:41:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:19.261 23:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.261 23:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.261 23:41:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:19.261 23:41:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.261 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:19.519 MallocBdevForConfigChangeCheck 00:04:19.519 23:41:49 json_config -- 
json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:19.519 23:41:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:19.519 23:41:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:19.519 23:41:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:19.519 23:41:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:19.777 23:41:50 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:19.777 INFO: shutting down applications... 00:04:19.777 23:41:50 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:19.777 23:41:50 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:19.777 23:41:50 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:19.777 23:41:50 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:21.675 Calling clear_iscsi_subsystem 00:04:21.675 Calling clear_nvmf_subsystem 00:04:21.675 Calling clear_nbd_subsystem 00:04:21.675 Calling clear_ublk_subsystem 00:04:21.675 Calling clear_vhost_blk_subsystem 00:04:21.675 Calling clear_vhost_scsi_subsystem 00:04:21.675 Calling clear_bdev_subsystem 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:21.675 23:41:52 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:21.933 23:41:52 json_config -- json_config/json_config.sh@349 -- # break 00:04:21.933 23:41:52 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:21.933 23:41:52 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:21.933 23:41:52 json_config -- json_config/common.sh@31 -- # local app=target 00:04:21.933 23:41:52 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:21.933 23:41:52 json_config -- json_config/common.sh@35 -- # [[ -n 3251990 ]] 00:04:21.933 23:41:52 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3251990 00:04:21.933 23:41:52 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:21.933 23:41:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:21.933 23:41:52 json_config -- json_config/common.sh@41 -- # kill -0 3251990 00:04:21.933 23:41:52 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:22.500 23:41:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:22.500 23:41:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:22.500 23:41:52 json_config -- json_config/common.sh@41 -- # kill -0 3251990 00:04:22.500 23:41:52 json_config -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:04:22.500 23:41:52 json_config -- json_config/common.sh@43 -- # break 00:04:22.500 23:41:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:22.500 23:41:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:22.500 SPDK target shutdown done 00:04:22.500 23:41:52 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:22.500 INFO: relaunching applications... 00:04:22.500 23:41:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.500 23:41:52 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.500 23:41:52 json_config -- json_config/common.sh@10 -- # shift 00:04:22.500 23:41:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.500 23:41:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.500 23:41:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.500 23:41:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.500 23:41:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.500 23:41:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3253199 00:04:22.500 23:41:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:22.500 23:41:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:22.500 Waiting for target to run... 00:04:22.500 23:41:52 json_config -- json_config/common.sh@25 -- # waitforlisten 3253199 /var/tmp/spdk_tgt.sock 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 3253199 ']' 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:22.500 23:41:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.500 [2024-07-24 23:41:52.965123] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
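For reference, the NVMe-oF state that json_config_setup_target assembled above (and that save_config just wrote to spdk_tgt_config.json) reduces to this rpc.py sequence, taken directly from the tgt_rpc calls in the trace; only the repo-absolute paths are shortened:

rpc="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"   # run from the SPDK repo root
$rpc bdev_malloc_create 8 512 --name MallocForNvmf0
$rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
$rpc nvmf_create_transport -t tcp -u 8192 -c 0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420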
00:04:22.500 [2024-07-24 23:41:52.965217] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3253199 ] 00:04:22.500 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.065 [2024-07-24 23:41:53.463772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.065 [2024-07-24 23:41:53.567090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.397 [2024-07-24 23:41:56.612017] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.397 [2024-07-24 23:41:56.644500] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:26.963 23:41:57 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.963 23:41:57 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:26.963 23:41:57 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.963 00:04:26.963 23:41:57 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:26.963 23:41:57 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:26.963 INFO: Checking if target configuration is the same... 00:04:26.963 23:41:57 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.963 23:41:57 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:26.963 23:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:26.963 + '[' 2 -ne 2 ']' 00:04:26.963 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:26.963 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:26.963 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:26.963 +++ basename /dev/fd/62 00:04:26.963 ++ mktemp /tmp/62.XXX 00:04:26.963 + tmp_file_1=/tmp/62.F0r 00:04:26.963 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:26.963 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:26.963 + tmp_file_2=/tmp/spdk_tgt_config.json.bS1 00:04:26.963 + ret=0 00:04:26.963 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.220 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:27.220 + diff -u /tmp/62.F0r /tmp/spdk_tgt_config.json.bS1 00:04:27.220 + echo 'INFO: JSON config files are the same' 00:04:27.220 INFO: JSON config files are the same 00:04:27.220 + rm /tmp/62.F0r /tmp/spdk_tgt_config.json.bS1 00:04:27.220 + exit 0 00:04:27.220 23:41:57 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:27.220 23:41:57 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:27.220 INFO: changing configuration and checking if this can be detected... 
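The "Checking if target configuration is the same" step above normalizes both JSON documents with config_filter.py -method sort before diffing, so object and key ordering cannot cause false mismatches. The core of json_diff.sh, slightly simplified (assuming, as the test's usage suggests, that the filter reads stdin and writes stdout):

# Compare two SPDK JSON configs for semantic equality; $1/$2 may be files or /dev/fd/N.
tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
test/json_config/config_filter.py -method sort < "$1" > "$tmp_file_1"
test/json_config/config_filter.py -method sort < "$2" > "$tmp_file_2"
if diff -u "$tmp_file_1" "$tmp_file_2"; then
    echo 'INFO: JSON config files are the same'
    ret=0
else
    ret=1                        # the caller decides whether a diff is an error
fi
rm "$tmp_file_1" "$tmp_file_2"
exit $ret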
00:04:27.220 23:41:57 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.220 23:41:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:27.478 23:41:58 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.478 23:41:58 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:27.478 23:41:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:27.478 + '[' 2 -ne 2 ']' 00:04:27.478 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:27.478 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:27.478 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:27.478 +++ basename /dev/fd/62 00:04:27.478 ++ mktemp /tmp/62.XXX 00:04:27.478 + tmp_file_1=/tmp/62.XSn 00:04:27.478 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:27.478 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:27.478 + tmp_file_2=/tmp/spdk_tgt_config.json.SOl 00:04:27.478 + ret=0 00:04:27.478 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.044 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:28.044 + diff -u /tmp/62.XSn /tmp/spdk_tgt_config.json.SOl 00:04:28.044 + ret=1 00:04:28.044 + echo '=== Start of file: /tmp/62.XSn ===' 00:04:28.044 + cat /tmp/62.XSn 00:04:28.044 + echo '=== End of file: /tmp/62.XSn ===' 00:04:28.044 + echo '' 00:04:28.044 + echo '=== Start of file: /tmp/spdk_tgt_config.json.SOl ===' 00:04:28.044 + cat /tmp/spdk_tgt_config.json.SOl 00:04:28.044 + echo '=== End of file: /tmp/spdk_tgt_config.json.SOl ===' 00:04:28.044 + echo '' 00:04:28.044 + rm /tmp/62.XSn /tmp/spdk_tgt_config.json.SOl 00:04:28.044 + exit 1 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:28.044 INFO: configuration change detected. 
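The change-detection pass that just ran mutates exactly one object, the canary bdev created earlier for this purpose, and then requires the same diff to fail (ret=1). Condensed from the trace:

# Delete the canary, then demand that a freshly saved config no longer matches disk.
scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
if test/json_config/json_diff.sh \
        <(scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config) spdk_tgt_config.json; then
    echo "ERROR: intentional change was not detected" >&2
    exit 1
fi
echo 'INFO: configuration change detected.'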
00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:28.044 23:41:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.044 23:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@321 -- # [[ -n 3253199 ]] 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:28.044 23:41:58 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:28.044 23:41:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.045 23:41:58 json_config -- json_config/json_config.sh@327 -- # killprocess 3253199 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@948 -- # '[' -z 3253199 ']' 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@952 -- # kill -0 3253199 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@953 -- # uname 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3253199 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3253199' 00:04:28.045 killing process with pid 3253199 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@967 -- # kill 3253199 00:04:28.045 23:41:58 json_config -- common/autotest_common.sh@972 -- # wait 3253199 00:04:29.941 23:42:00 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:29.941 23:42:00 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:29.941 23:42:00 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:29.941 23:42:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.941 23:42:00 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:29.942 23:42:00 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:29.942 INFO: Success 00:04:29.942 00:04:29.942 real 0m16.778s 
00:04:29.942 user 0m18.804s 00:04:29.942 sys 0m2.005s 00:04:29.942 23:42:00 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.942 23:42:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:29.942 ************************************ 00:04:29.942 END TEST json_config 00:04:29.942 ************************************ 00:04:29.942 23:42:00 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:29.942 23:42:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.942 23:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.942 23:42:00 -- common/autotest_common.sh@10 -- # set +x 00:04:29.942 ************************************ 00:04:29.942 START TEST json_config_extra_key 00:04:29.942 ************************************ 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:29.942 23:42:00 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:29.942 23:42:00 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:29.942 23:42:00 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:29.942 23:42:00 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.942 23:42:00 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.942 23:42:00 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.942 23:42:00 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:29.942 23:42:00 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:29.942 23:42:00 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:29.942 23:42:00 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:29.942 INFO: launching applications... 00:04:29.942 23:42:00 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3254263 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:29.942 Waiting for target to run... 00:04:29.942 23:42:00 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3254263 /var/tmp/spdk_tgt.sock 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 3254263 ']' 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:29.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:29.942 23:42:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:29.942 [2024-07-24 23:42:00.347865] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
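Every app start in this log goes through waitforlisten, which polls until the new process answers RPC on the given socket. A simplified sketch of the pattern only; the exact helper in common/autotest_common.sh also validates its pid argument and toggles xtrace, and the specific RPC used as the probe is an assumption here:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i++ < max_retries )); do
        kill -0 "$pid" 2> /dev/null || return 1        # target died during startup
        # assumed probe: any cheap RPC proves the server is up and listening
        scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
        sleep 0.5
    done
    return 1
}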
00:04:29.942 [2024-07-24 23:42:00.347946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254263 ] 00:04:29.942 EAL: No free 2048 kB hugepages reported on node 1 00:04:30.200 [2024-07-24 23:42:00.695386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.200 [2024-07-24 23:42:00.794947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.765 23:42:01 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:30.765 23:42:01 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:30.765 00:04:30.765 23:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:30.765 INFO: shutting down applications... 00:04:30.765 23:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3254263 ]] 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3254263 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3254263 00:04:30.765 23:42:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3254263 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:31.329 23:42:01 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:31.329 SPDK target shutdown done 00:04:31.329 23:42:01 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:31.329 Success 00:04:31.329 00:04:31.329 real 0m1.542s 00:04:31.329 user 0m1.536s 00:04:31.329 sys 0m0.440s 00:04:31.329 23:42:01 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.329 23:42:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:31.329 ************************************ 00:04:31.329 END TEST json_config_extra_key 00:04:31.329 ************************************ 00:04:31.329 23:42:01 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.329 23:42:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.329 23:42:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.329 23:42:01 -- common/autotest_common.sh@10 -- # set +x 00:04:31.329 
************************************ 00:04:31.329 START TEST alias_rpc 00:04:31.329 ************************************ 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:31.329 * Looking for test storage... 00:04:31.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:31.329 23:42:01 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:31.329 23:42:01 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3254523 00:04:31.329 23:42:01 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:31.329 23:42:01 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3254523 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 3254523 ']' 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.329 23:42:01 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:31.329 [2024-07-24 23:42:01.935466] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:31.329 [2024-07-24 23:42:01.935579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254523 ] 00:04:31.586 EAL: No free 2048 kB hugepages reported on node 1 00:04:31.586 [2024-07-24 23:42:01.995577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.586 [2024-07-24 23:42:02.101066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.843 23:42:02 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:31.843 23:42:02 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:31.843 23:42:02 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:32.101 23:42:02 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3254523 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 3254523 ']' 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 3254523 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3254523 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3254523' 00:04:32.101 killing process with pid 3254523 00:04:32.101 23:42:02 alias_rpc -- common/autotest_common.sh@967 -- # kill 3254523 00:04:32.101 23:42:02 
alias_rpc -- common/autotest_common.sh@972 -- # wait 3254523 00:04:32.664 00:04:32.664 real 0m1.301s 00:04:32.664 user 0m1.351s 00:04:32.664 sys 0m0.442s 00:04:32.664 23:42:03 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.664 23:42:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:32.664 ************************************ 00:04:32.664 END TEST alias_rpc 00:04:32.664 ************************************ 00:04:32.665 23:42:03 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:32.665 23:42:03 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.665 23:42:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.665 23:42:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.665 23:42:03 -- common/autotest_common.sh@10 -- # set +x 00:04:32.665 ************************************ 00:04:32.665 START TEST spdkcli_tcp 00:04:32.665 ************************************ 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:32.665 * Looking for test storage... 00:04:32.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3254725 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:32.665 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3254725 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 3254725 ']' 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:32.665 23:42:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:32.922 [2024-07-24 23:42:03.292586] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
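Every test in this stretch of the log starts a fresh spdk_tgt and blocks in the waitforlisten helper until the RPC socket at /var/tmp/spdk.sock answers. One plausible way to reproduce that wait by hand (a sketch, not necessarily the helper's exact implementation) is to poll a cheap RPC such as spdk_get_version:

    ./build/bin/spdk_tgt -m 0x3 -p 0 &     # same flags as the spdkcli_tcp run above
    tgt_pid=$!                             # variable name illustrative
    until ./scripts/rpc.py -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.1                          # retry until the UNIX socket accepts RPCs
    done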
00:04:32.922 [2024-07-24 23:42:03.292700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254725 ] 00:04:32.922 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.922 [2024-07-24 23:42:03.352669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:32.922 [2024-07-24 23:42:03.463197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.922 [2024-07-24 23:42:03.463202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.180 23:42:03 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:33.180 23:42:03 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:33.180 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3254854 00:04:33.180 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:33.180 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:33.439 [ 00:04:33.439 "bdev_malloc_delete", 00:04:33.439 "bdev_malloc_create", 00:04:33.439 "bdev_null_resize", 00:04:33.439 "bdev_null_delete", 00:04:33.439 "bdev_null_create", 00:04:33.439 "bdev_nvme_cuse_unregister", 00:04:33.439 "bdev_nvme_cuse_register", 00:04:33.439 "bdev_opal_new_user", 00:04:33.439 "bdev_opal_set_lock_state", 00:04:33.439 "bdev_opal_delete", 00:04:33.439 "bdev_opal_get_info", 00:04:33.439 "bdev_opal_create", 00:04:33.439 "bdev_nvme_opal_revert", 00:04:33.439 "bdev_nvme_opal_init", 00:04:33.439 "bdev_nvme_send_cmd", 00:04:33.439 "bdev_nvme_get_path_iostat", 00:04:33.439 "bdev_nvme_get_mdns_discovery_info", 00:04:33.439 "bdev_nvme_stop_mdns_discovery", 00:04:33.439 "bdev_nvme_start_mdns_discovery", 00:04:33.439 "bdev_nvme_set_multipath_policy", 00:04:33.439 "bdev_nvme_set_preferred_path", 00:04:33.439 "bdev_nvme_get_io_paths", 00:04:33.439 "bdev_nvme_remove_error_injection", 00:04:33.439 "bdev_nvme_add_error_injection", 00:04:33.439 "bdev_nvme_get_discovery_info", 00:04:33.439 "bdev_nvme_stop_discovery", 00:04:33.439 "bdev_nvme_start_discovery", 00:04:33.439 "bdev_nvme_get_controller_health_info", 00:04:33.439 "bdev_nvme_disable_controller", 00:04:33.439 "bdev_nvme_enable_controller", 00:04:33.439 "bdev_nvme_reset_controller", 00:04:33.439 "bdev_nvme_get_transport_statistics", 00:04:33.439 "bdev_nvme_apply_firmware", 00:04:33.439 "bdev_nvme_detach_controller", 00:04:33.439 "bdev_nvme_get_controllers", 00:04:33.439 "bdev_nvme_attach_controller", 00:04:33.439 "bdev_nvme_set_hotplug", 00:04:33.439 "bdev_nvme_set_options", 00:04:33.439 "bdev_passthru_delete", 00:04:33.439 "bdev_passthru_create", 00:04:33.439 "bdev_lvol_set_parent_bdev", 00:04:33.439 "bdev_lvol_set_parent", 00:04:33.439 "bdev_lvol_check_shallow_copy", 00:04:33.439 "bdev_lvol_start_shallow_copy", 00:04:33.439 "bdev_lvol_grow_lvstore", 00:04:33.439 "bdev_lvol_get_lvols", 00:04:33.439 "bdev_lvol_get_lvstores", 00:04:33.439 "bdev_lvol_delete", 00:04:33.439 "bdev_lvol_set_read_only", 00:04:33.439 "bdev_lvol_resize", 00:04:33.439 "bdev_lvol_decouple_parent", 00:04:33.439 "bdev_lvol_inflate", 00:04:33.439 "bdev_lvol_rename", 00:04:33.439 "bdev_lvol_clone_bdev", 00:04:33.439 "bdev_lvol_clone", 00:04:33.439 "bdev_lvol_snapshot", 00:04:33.439 "bdev_lvol_create", 00:04:33.439 "bdev_lvol_delete_lvstore", 00:04:33.439 
"bdev_lvol_rename_lvstore", 00:04:33.439 "bdev_lvol_create_lvstore", 00:04:33.439 "bdev_raid_set_options", 00:04:33.439 "bdev_raid_remove_base_bdev", 00:04:33.439 "bdev_raid_add_base_bdev", 00:04:33.439 "bdev_raid_delete", 00:04:33.439 "bdev_raid_create", 00:04:33.439 "bdev_raid_get_bdevs", 00:04:33.439 "bdev_error_inject_error", 00:04:33.439 "bdev_error_delete", 00:04:33.439 "bdev_error_create", 00:04:33.439 "bdev_split_delete", 00:04:33.439 "bdev_split_create", 00:04:33.439 "bdev_delay_delete", 00:04:33.439 "bdev_delay_create", 00:04:33.439 "bdev_delay_update_latency", 00:04:33.439 "bdev_zone_block_delete", 00:04:33.439 "bdev_zone_block_create", 00:04:33.439 "blobfs_create", 00:04:33.439 "blobfs_detect", 00:04:33.439 "blobfs_set_cache_size", 00:04:33.439 "bdev_aio_delete", 00:04:33.439 "bdev_aio_rescan", 00:04:33.439 "bdev_aio_create", 00:04:33.439 "bdev_ftl_set_property", 00:04:33.439 "bdev_ftl_get_properties", 00:04:33.439 "bdev_ftl_get_stats", 00:04:33.439 "bdev_ftl_unmap", 00:04:33.439 "bdev_ftl_unload", 00:04:33.439 "bdev_ftl_delete", 00:04:33.439 "bdev_ftl_load", 00:04:33.439 "bdev_ftl_create", 00:04:33.439 "bdev_virtio_attach_controller", 00:04:33.439 "bdev_virtio_scsi_get_devices", 00:04:33.439 "bdev_virtio_detach_controller", 00:04:33.439 "bdev_virtio_blk_set_hotplug", 00:04:33.439 "bdev_iscsi_delete", 00:04:33.439 "bdev_iscsi_create", 00:04:33.439 "bdev_iscsi_set_options", 00:04:33.439 "accel_error_inject_error", 00:04:33.439 "ioat_scan_accel_module", 00:04:33.439 "dsa_scan_accel_module", 00:04:33.439 "iaa_scan_accel_module", 00:04:33.439 "vfu_virtio_create_scsi_endpoint", 00:04:33.439 "vfu_virtio_scsi_remove_target", 00:04:33.439 "vfu_virtio_scsi_add_target", 00:04:33.439 "vfu_virtio_create_blk_endpoint", 00:04:33.439 "vfu_virtio_delete_endpoint", 00:04:33.439 "keyring_file_remove_key", 00:04:33.439 "keyring_file_add_key", 00:04:33.439 "keyring_linux_set_options", 00:04:33.439 "iscsi_get_histogram", 00:04:33.439 "iscsi_enable_histogram", 00:04:33.439 "iscsi_set_options", 00:04:33.439 "iscsi_get_auth_groups", 00:04:33.439 "iscsi_auth_group_remove_secret", 00:04:33.439 "iscsi_auth_group_add_secret", 00:04:33.439 "iscsi_delete_auth_group", 00:04:33.439 "iscsi_create_auth_group", 00:04:33.439 "iscsi_set_discovery_auth", 00:04:33.439 "iscsi_get_options", 00:04:33.439 "iscsi_target_node_request_logout", 00:04:33.439 "iscsi_target_node_set_redirect", 00:04:33.439 "iscsi_target_node_set_auth", 00:04:33.439 "iscsi_target_node_add_lun", 00:04:33.439 "iscsi_get_stats", 00:04:33.439 "iscsi_get_connections", 00:04:33.439 "iscsi_portal_group_set_auth", 00:04:33.439 "iscsi_start_portal_group", 00:04:33.439 "iscsi_delete_portal_group", 00:04:33.439 "iscsi_create_portal_group", 00:04:33.439 "iscsi_get_portal_groups", 00:04:33.439 "iscsi_delete_target_node", 00:04:33.439 "iscsi_target_node_remove_pg_ig_maps", 00:04:33.439 "iscsi_target_node_add_pg_ig_maps", 00:04:33.439 "iscsi_create_target_node", 00:04:33.439 "iscsi_get_target_nodes", 00:04:33.439 "iscsi_delete_initiator_group", 00:04:33.439 "iscsi_initiator_group_remove_initiators", 00:04:33.439 "iscsi_initiator_group_add_initiators", 00:04:33.439 "iscsi_create_initiator_group", 00:04:33.439 "iscsi_get_initiator_groups", 00:04:33.439 "nvmf_set_crdt", 00:04:33.439 "nvmf_set_config", 00:04:33.439 "nvmf_set_max_subsystems", 00:04:33.439 "nvmf_stop_mdns_prr", 00:04:33.439 "nvmf_publish_mdns_prr", 00:04:33.439 "nvmf_subsystem_get_listeners", 00:04:33.439 "nvmf_subsystem_get_qpairs", 00:04:33.439 "nvmf_subsystem_get_controllers", 00:04:33.439 
"nvmf_get_stats", 00:04:33.439 "nvmf_get_transports", 00:04:33.439 "nvmf_create_transport", 00:04:33.439 "nvmf_get_targets", 00:04:33.439 "nvmf_delete_target", 00:04:33.439 "nvmf_create_target", 00:04:33.439 "nvmf_subsystem_allow_any_host", 00:04:33.439 "nvmf_subsystem_remove_host", 00:04:33.439 "nvmf_subsystem_add_host", 00:04:33.439 "nvmf_ns_remove_host", 00:04:33.439 "nvmf_ns_add_host", 00:04:33.439 "nvmf_subsystem_remove_ns", 00:04:33.439 "nvmf_subsystem_add_ns", 00:04:33.439 "nvmf_subsystem_listener_set_ana_state", 00:04:33.439 "nvmf_discovery_get_referrals", 00:04:33.439 "nvmf_discovery_remove_referral", 00:04:33.439 "nvmf_discovery_add_referral", 00:04:33.439 "nvmf_subsystem_remove_listener", 00:04:33.439 "nvmf_subsystem_add_listener", 00:04:33.439 "nvmf_delete_subsystem", 00:04:33.439 "nvmf_create_subsystem", 00:04:33.439 "nvmf_get_subsystems", 00:04:33.439 "env_dpdk_get_mem_stats", 00:04:33.439 "nbd_get_disks", 00:04:33.439 "nbd_stop_disk", 00:04:33.439 "nbd_start_disk", 00:04:33.439 "ublk_recover_disk", 00:04:33.439 "ublk_get_disks", 00:04:33.439 "ublk_stop_disk", 00:04:33.439 "ublk_start_disk", 00:04:33.439 "ublk_destroy_target", 00:04:33.439 "ublk_create_target", 00:04:33.439 "virtio_blk_create_transport", 00:04:33.439 "virtio_blk_get_transports", 00:04:33.439 "vhost_controller_set_coalescing", 00:04:33.439 "vhost_get_controllers", 00:04:33.439 "vhost_delete_controller", 00:04:33.439 "vhost_create_blk_controller", 00:04:33.439 "vhost_scsi_controller_remove_target", 00:04:33.439 "vhost_scsi_controller_add_target", 00:04:33.439 "vhost_start_scsi_controller", 00:04:33.439 "vhost_create_scsi_controller", 00:04:33.439 "thread_set_cpumask", 00:04:33.439 "framework_get_governor", 00:04:33.439 "framework_get_scheduler", 00:04:33.439 "framework_set_scheduler", 00:04:33.439 "framework_get_reactors", 00:04:33.439 "thread_get_io_channels", 00:04:33.439 "thread_get_pollers", 00:04:33.439 "thread_get_stats", 00:04:33.439 "framework_monitor_context_switch", 00:04:33.439 "spdk_kill_instance", 00:04:33.439 "log_enable_timestamps", 00:04:33.439 "log_get_flags", 00:04:33.439 "log_clear_flag", 00:04:33.439 "log_set_flag", 00:04:33.439 "log_get_level", 00:04:33.439 "log_set_level", 00:04:33.439 "log_get_print_level", 00:04:33.439 "log_set_print_level", 00:04:33.439 "framework_enable_cpumask_locks", 00:04:33.440 "framework_disable_cpumask_locks", 00:04:33.440 "framework_wait_init", 00:04:33.440 "framework_start_init", 00:04:33.440 "scsi_get_devices", 00:04:33.440 "bdev_get_histogram", 00:04:33.440 "bdev_enable_histogram", 00:04:33.440 "bdev_set_qos_limit", 00:04:33.440 "bdev_set_qd_sampling_period", 00:04:33.440 "bdev_get_bdevs", 00:04:33.440 "bdev_reset_iostat", 00:04:33.440 "bdev_get_iostat", 00:04:33.440 "bdev_examine", 00:04:33.440 "bdev_wait_for_examine", 00:04:33.440 "bdev_set_options", 00:04:33.440 "notify_get_notifications", 00:04:33.440 "notify_get_types", 00:04:33.440 "accel_get_stats", 00:04:33.440 "accel_set_options", 00:04:33.440 "accel_set_driver", 00:04:33.440 "accel_crypto_key_destroy", 00:04:33.440 "accel_crypto_keys_get", 00:04:33.440 "accel_crypto_key_create", 00:04:33.440 "accel_assign_opc", 00:04:33.440 "accel_get_module_info", 00:04:33.440 "accel_get_opc_assignments", 00:04:33.440 "vmd_rescan", 00:04:33.440 "vmd_remove_device", 00:04:33.440 "vmd_enable", 00:04:33.440 "sock_get_default_impl", 00:04:33.440 "sock_set_default_impl", 00:04:33.440 "sock_impl_set_options", 00:04:33.440 "sock_impl_get_options", 00:04:33.440 "iobuf_get_stats", 00:04:33.440 "iobuf_set_options", 
00:04:33.440 "keyring_get_keys", 00:04:33.440 "framework_get_pci_devices", 00:04:33.440 "framework_get_config", 00:04:33.440 "framework_get_subsystems", 00:04:33.440 "vfu_tgt_set_base_path", 00:04:33.440 "trace_get_info", 00:04:33.440 "trace_get_tpoint_group_mask", 00:04:33.440 "trace_disable_tpoint_group", 00:04:33.440 "trace_enable_tpoint_group", 00:04:33.440 "trace_clear_tpoint_mask", 00:04:33.440 "trace_set_tpoint_mask", 00:04:33.440 "spdk_get_version", 00:04:33.440 "rpc_get_methods" 00:04:33.440 ] 00:04:33.440 23:42:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:33.440 23:42:03 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:33.440 23:42:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:33.440 23:42:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:33.440 23:42:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3254725 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 3254725 ']' 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 3254725 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3254725 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3254725' 00:04:33.440 killing process with pid 3254725 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 3254725 00:04:33.440 23:42:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 3254725 00:04:34.005 00:04:34.005 real 0m1.316s 00:04:34.005 user 0m2.300s 00:04:34.005 sys 0m0.451s 00:04:34.005 23:42:04 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.005 23:42:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:34.005 ************************************ 00:04:34.005 END TEST spdkcli_tcp 00:04:34.005 ************************************ 00:04:34.005 23:42:04 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.005 23:42:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.005 23:42:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.005 23:42:04 -- common/autotest_common.sh@10 -- # set +x 00:04:34.005 ************************************ 00:04:34.005 START TEST dpdk_mem_utility 00:04:34.005 ************************************ 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:34.005 * Looking for test storage... 
00:04:34.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:34.005 23:42:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:34.005 23:42:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3255044 00:04:34.005 23:42:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.005 23:42:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3255044 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 3255044 ']' 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:34.005 23:42:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:34.264 [2024-07-24 23:42:04.644362] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:34.264 [2024-07-24 23:42:04.644462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255044 ] 00:04:34.264 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.264 [2024-07-24 23:42:04.711629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.264 [2024-07-24 23:42:04.833227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.198 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:35.198 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:35.198 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:35.198 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:35.198 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:35.198 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.198 { 00:04:35.198 "filename": "/tmp/spdk_mem_dump.txt" 00:04:35.198 } 00:04:35.198 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:35.198 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:35.198 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:35.198 1 heaps totaling size 814.000000 MiB 00:04:35.198 size: 814.000000 MiB heap id: 0 00:04:35.198 end heaps---------- 00:04:35.198 8 mempools totaling size 598.116089 MiB 00:04:35.198 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:35.198 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:35.198 size: 84.521057 MiB name: bdev_io_3255044 00:04:35.198 size: 51.011292 MiB name: evtpool_3255044 00:04:35.198 
size: 50.003479 MiB name: msgpool_3255044 00:04:35.198 size: 21.763794 MiB name: PDU_Pool 00:04:35.198 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:35.198 size: 0.026123 MiB name: Session_Pool 00:04:35.198 end mempools------- 00:04:35.198 6 memzones totaling size 4.142822 MiB 00:04:35.198 size: 1.000366 MiB name: RG_ring_0_3255044 00:04:35.198 size: 1.000366 MiB name: RG_ring_1_3255044 00:04:35.198 size: 1.000366 MiB name: RG_ring_4_3255044 00:04:35.198 size: 1.000366 MiB name: RG_ring_5_3255044 00:04:35.198 size: 0.125366 MiB name: RG_ring_2_3255044 00:04:35.198 size: 0.015991 MiB name: RG_ring_3_3255044 00:04:35.198 end memzones------- 00:04:35.198 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:35.198 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:35.198 list of free elements. size: 12.519348 MiB 00:04:35.198 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:35.198 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:35.198 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:35.198 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:35.198 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:35.198 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:35.198 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:35.198 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:35.198 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:35.198 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:35.198 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:35.198 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:35.198 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:35.198 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:35.198 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:35.198 list of standard malloc elements. 
size: 199.218079 MiB 00:04:35.198 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:35.198 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:35.198 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:35.198 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:35.198 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:35.198 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:35.198 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:35.198 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:35.198 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:35.198 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:35.198 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:35.198 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:35.198 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:35.198 list of memzone associated elements. 
size: 602.262573 MiB 00:04:35.198 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:35.198 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:35.199 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:35.199 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:35.199 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:35.199 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_3255044_0 00:04:35.199 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:35.199 associated memzone info: size: 48.002930 MiB name: MP_evtpool_3255044_0 00:04:35.199 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:35.199 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3255044_0 00:04:35.199 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:35.199 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:35.199 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:35.199 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:35.199 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:35.199 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_3255044 00:04:35.199 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:35.199 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3255044 00:04:35.199 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:35.199 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3255044 00:04:35.199 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:35.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:35.199 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:35.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:35.199 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:35.199 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:35.199 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:35.199 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:35.199 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:35.199 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3255044 00:04:35.199 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:35.199 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3255044 00:04:35.199 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:35.199 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3255044 00:04:35.199 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:35.199 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3255044 00:04:35.199 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:35.199 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3255044 00:04:35.199 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:35.199 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:35.199 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:35.199 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:35.199 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:35.199 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:35.199 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:35.199 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_3255044 00:04:35.199 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:35.199 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:35.199 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:35.199 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:35.199 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:35.199 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3255044 00:04:35.199 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:35.199 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:35.199 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:35.199 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3255044 00:04:35.199 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:35.199 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3255044 00:04:35.199 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:35.199 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:35.199 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:35.199 23:42:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3255044 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 3255044 ']' 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 3255044 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3255044 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3255044' 00:04:35.199 killing process with pid 3255044 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 3255044 00:04:35.199 23:42:05 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 3255044 00:04:35.765 00:04:35.765 real 0m1.632s 00:04:35.765 user 0m1.793s 00:04:35.765 sys 0m0.448s 00:04:35.765 23:42:06 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.765 23:42:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:35.765 ************************************ 00:04:35.765 END TEST dpdk_mem_utility 00:04:35.765 ************************************ 00:04:35.765 23:42:06 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:35.765 23:42:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.765 23:42:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.765 23:42:06 -- common/autotest_common.sh@10 -- # set +x 00:04:35.765 ************************************ 00:04:35.765 START TEST event 00:04:35.765 ************************************ 00:04:35.765 23:42:06 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:35.765 * Looking for test storage... 
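The heap/mempool/memzone report above can be reproduced against any running SPDK app with the same two tools the test uses: the env_dpdk_get_mem_stats RPC makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py post-processes it. Flags as traced (in this run, -m 0 is what produced the element-level listing for heap 0):

    ./scripts/rpc.py env_dpdk_get_mem_stats    # target dumps to /tmp/spdk_mem_dump.txt
    ./scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    ./scripts/dpdk_mem_info.py -m 0            # element-level detail, as shown above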
00:04:35.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:35.765 23:42:06 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:35.765 23:42:06 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:35.765 23:42:06 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.765 23:42:06 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:35.765 23:42:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.765 23:42:06 event -- common/autotest_common.sh@10 -- # set +x 00:04:35.765 ************************************ 00:04:35.765 START TEST event_perf 00:04:35.765 ************************************ 00:04:35.765 23:42:06 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:35.765 Running I/O for 1 seconds...[2024-07-24 23:42:06.318238] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:35.765 [2024-07-24 23:42:06.318325] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255246 ] 00:04:35.765 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.024 [2024-07-24 23:42:06.385412] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:36.024 [2024-07-24 23:42:06.513272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:36.024 [2024-07-24 23:42:06.513316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:36.024 [2024-07-24 23:42:06.513386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:36.024 [2024-07-24 23:42:06.513389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.395 Running I/O for 1 seconds... 00:04:37.395 lcore 0: 228077 00:04:37.395 lcore 1: 228077 00:04:37.395 lcore 2: 228076 00:04:37.395 lcore 3: 228077 00:04:37.395 done. 00:04:37.395 00:04:37.395 real 0m1.332s 00:04:37.395 user 0m4.223s 00:04:37.395 sys 0m0.094s 00:04:37.395 23:42:07 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.395 23:42:07 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:37.395 ************************************ 00:04:37.395 END TEST event_perf 00:04:37.395 ************************************ 00:04:37.395 23:42:07 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:37.395 23:42:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:37.395 23:42:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.395 23:42:07 event -- common/autotest_common.sh@10 -- # set +x 00:04:37.395 ************************************ 00:04:37.395 START TEST event_reactor 00:04:37.395 ************************************ 00:04:37.395 23:42:07 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:37.395 [2024-07-24 23:42:07.697032] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
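For the event_perf run above: the binary schedules events across all four reactors (-m 0xF) for one second (-t 1) and prints a per-lcore event count. Invocation and the resulting throughput:

    ./test/event/event_perf/event_perf -m 0xF -t 1
    # per-lcore counts from this run sum to the total event rate:
    # 228077 + 228077 + 228076 + 228077 = 912307 events in one second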
00:04:37.395 [2024-07-24 23:42:07.697099] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3255620 ] 00:04:37.395 EAL: No free 2048 kB hugepages reported on node 1 00:04:37.395 [2024-07-24 23:42:07.759024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.395 [2024-07-24 23:42:07.879912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.769 test_start 00:04:38.769 oneshot 00:04:38.769 tick 100 00:04:38.769 tick 100 00:04:38.769 tick 250 00:04:38.769 tick 100 00:04:38.769 tick 100 00:04:38.769 tick 100 00:04:38.769 tick 250 00:04:38.769 tick 500 00:04:38.769 tick 100 00:04:38.769 tick 100 00:04:38.769 tick 250 00:04:38.769 tick 100 00:04:38.769 tick 100 00:04:38.769 test_end 00:04:38.769 00:04:38.769 real 0m1.311s 00:04:38.769 user 0m1.217s 00:04:38.769 sys 0m0.088s 00:04:38.769 23:42:08 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.769 23:42:08 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:38.769 ************************************ 00:04:38.769 END TEST event_reactor 00:04:38.769 ************************************ 00:04:38.769 23:42:09 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.769 23:42:09 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:38.769 23:42:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.769 23:42:09 event -- common/autotest_common.sh@10 -- # set +x 00:04:38.769 ************************************ 00:04:38.769 START TEST event_reactor_perf 00:04:38.769 ************************************ 00:04:38.769 23:42:09 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:38.769 [2024-07-24 23:42:09.053406] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:04:38.769 [2024-07-24 23:42:09.053466] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256151 ] 00:04:38.769 EAL: No free 2048 kB hugepages reported on node 1 00:04:38.769 [2024-07-24 23:42:09.116160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.769 [2024-07-24 23:42:09.233378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.143 test_start 00:04:40.143 test_end 00:04:40.143 Performance: 358087 events per second 00:04:40.143 00:04:40.143 real 0m1.313s 00:04:40.143 user 0m1.231s 00:04:40.143 sys 0m0.076s 00:04:40.143 23:42:10 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.143 23:42:10 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:40.143 ************************************ 00:04:40.143 END TEST event_reactor_perf 00:04:40.143 ************************************ 00:04:40.143 23:42:10 event -- event/event.sh@49 -- # uname -s 00:04:40.143 23:42:10 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:40.143 23:42:10 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:40.143 23:42:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.143 23:42:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.143 23:42:10 event -- common/autotest_common.sh@10 -- # set +x 00:04:40.143 ************************************ 00:04:40.143 START TEST event_scheduler 00:04:40.143 ************************************ 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:40.144 * Looking for test storage... 00:04:40.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3256371 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3256371 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 3256371 ']' 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
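The two reactor tests bracketing this point have the same shape: event_reactor pushes a oneshot event plus a series of tick events through a single reactor, and reactor_perf measures raw event throughput (358087 events per second on this host). Both are plain one-second invocations:

    ./test/event/reactor/reactor -t 1              # oneshot + tick events, then test_end
    ./test/event/reactor_perf/reactor_perf -t 1    # prints 'Performance: N events per second'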
00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.144 [2024-07-24 23:42:10.487409] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:04:40.144 [2024-07-24 23:42:10.487496] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256371 ] 00:04:40.144 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.144 [2024-07-24 23:42:10.544534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:40.144 [2024-07-24 23:42:10.654187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.144 [2024-07-24 23:42:10.654229] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:40.144 [2024-07-24 23:42:10.654285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:40.144 [2024-07-24 23:42:10.654289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.144 [2024-07-24 23:42:10.703098] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:40.144 [2024-07-24 23:42:10.703123] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:40.144 [2024-07-24 23:42:10.703140] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:40.144 [2024-07-24 23:42:10.703150] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:40.144 [2024-07-24 23:42:10.703160] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.144 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.144 23:42:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 [2024-07-24 23:42:10.801399] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
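Because the scheduler app is launched with --wait-for-rpc, the framework stays uninitialized until a scheduler is chosen over RPC; that is what the pair of calls above does, after which the dynamic scheduler logs its defaults (load limit 20, core limit 80, core busy 95). The equivalent manual sequence against a live target:

    ./scripts/rpc.py framework_set_scheduler dynamic
    ./scripts/rpc.py framework_start_init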
00:04:40.403 23:42:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:40.403 23:42:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.403 23:42:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 ************************************ 00:04:40.403 START TEST scheduler_create_thread 00:04:40.403 ************************************ 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 2 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 3 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 4 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 5 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 6 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 7 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 8 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 9 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 10 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:40.403 23:42:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.969 23:42:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:40.969 00:04:40.969 real 0m0.590s 00:04:40.969 user 0m0.009s 00:04:40.969 sys 0m0.005s 00:04:40.969 23:42:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.969 23:42:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:40.969 ************************************ 00:04:40.969 END TEST scheduler_create_thread 00:04:40.969 ************************************ 00:04:40.969 23:42:11 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:40.969 23:42:11 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3256371 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 3256371 ']' 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 3256371 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3256371 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3256371' 00:04:40.969 killing process with pid 3256371 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 3256371 00:04:40.969 23:42:11 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 3256371 00:04:41.535 [2024-07-24 23:42:11.901572] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
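The scheduler_create_thread subtest drives the test app through an rpc.py plugin: it creates an active (-a 100) and an idle (-a 0) thread pinned to each core, an unpinned one_third_active thread at 30%, a half_active thread that it then raises to 50% with scheduler_thread_set_active, and a throwaway thread that it deletes. A condensed sketch of those calls (plugin name taken from the trace; how rpc.py locates the plugin module, e.g. via PYTHONPATH, is assumed here, and the thread IDs are the ones this run happened to get back):

    rpc="./scripts/rpc.py --plugin scheduler_plugin"
    $rpc scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread on core 0
    $rpc scheduler_thread_create -n idle_pinned   -m 0x1 -a 0     # idle thread on core 0
    $rpc scheduler_thread_set_active 11 50                        # thread 11 -> 50% active
    $rpc scheduler_thread_delete 12                               # remove the scratch thread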
00:04:41.794 00:04:41.794 real 0m1.767s 00:04:41.794 user 0m2.257s 00:04:41.794 sys 0m0.315s 00:04:41.794 23:42:12 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.794 23:42:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:41.794 ************************************ 00:04:41.794 END TEST event_scheduler 00:04:41.794 ************************************ 00:04:41.794 23:42:12 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:41.794 23:42:12 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:41.794 23:42:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.794 23:42:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.794 23:42:12 event -- common/autotest_common.sh@10 -- # set +x 00:04:41.794 ************************************ 00:04:41.794 START TEST app_repeat 00:04:41.794 ************************************ 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3256561 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3256561' 00:04:41.794 Process app_repeat pid: 3256561 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:41.794 spdk_app_start Round 0 00:04:41.794 23:42:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3256561 /var/tmp/spdk-nbd.sock 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3256561 ']' 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:41.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:41.794 23:42:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:41.794 [2024-07-24 23:42:12.245982] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
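app_repeat exercises repeated start/stop rounds of an SPDK app: the harness loads the kernel nbd module and starts app_repeat with its RPC server on /var/tmp/spdk-nbd.sock, two cores (-m 0x3), and -t 4, apparently matching the repeat_times=4 variable set just above. The launch reduces to:

    modprobe nbd
    ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 &
    repeat_pid=$!                          # variable name illustrative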
00:04:41.794 [2024-07-24 23:42:12.246048] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3256561 ] 00:04:41.794 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.794 [2024-07-24 23:42:12.308846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:42.053 [2024-07-24 23:42:12.428080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.053 [2024-07-24 23:42:12.428085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.053 23:42:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:42.053 23:42:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:42.053 23:42:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.311 Malloc0 00:04:42.311 23:42:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:42.569 Malloc1 00:04:42.569 23:42:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.569 23:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:42.827 /dev/nbd0 00:04:42.827 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:42.827 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:42.827 23:42:13 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:42.827 1+0 records in 00:04:42.827 1+0 records out 00:04:42.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216036 s, 19.0 MB/s 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:42.827 23:42:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:42.827 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:42.827 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:42.827 23:42:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:43.085 /dev/nbd1 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:43.085 1+0 records in 00:04:43.085 1+0 records out 00:04:43.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0002431 s, 16.8 MB/s 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:43.085 23:42:13 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:43.085 23:42:13 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.085 23:42:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:43.343 { 00:04:43.343 "nbd_device": "/dev/nbd0", 00:04:43.343 "bdev_name": "Malloc0" 00:04:43.343 }, 00:04:43.343 { 00:04:43.343 "nbd_device": "/dev/nbd1", 00:04:43.343 "bdev_name": "Malloc1" 00:04:43.343 } 00:04:43.343 ]' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:43.343 { 00:04:43.343 "nbd_device": "/dev/nbd0", 00:04:43.343 "bdev_name": "Malloc0" 00:04:43.343 }, 00:04:43.343 { 00:04:43.343 "nbd_device": "/dev/nbd1", 00:04:43.343 "bdev_name": "Malloc1" 00:04:43.343 } 00:04:43.343 ]' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:43.343 /dev/nbd1' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:43.343 /dev/nbd1' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:43.343 256+0 records in 00:04:43.343 256+0 records out 00:04:43.343 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00501508 s, 209 MB/s 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.343 23:42:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:43.601 256+0 records in 00:04:43.601 256+0 records out 00:04:43.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209898 s, 50.0 MB/s 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:43.601 256+0 records in 00:04:43.601 256+0 records out 00:04:43.601 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0240154 s, 43.7 MB/s 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:43.601 23:42:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.601 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:43.859 23:42:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:44.116 23:42:14 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:44.116 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:44.374 23:42:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:44.374 23:42:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:44.632 23:42:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:44.890 [2024-07-24 23:42:15.388430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:45.148 [2024-07-24 23:42:15.503768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.148 [2024-07-24 23:42:15.503768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.148 [2024-07-24 23:42:15.565454] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:45.148 [2024-07-24 23:42:15.565543] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:47.675 23:42:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:47.675 23:42:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:47.675 spdk_app_start Round 1 00:04:47.675 23:42:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3256561 /var/tmp/spdk-nbd.sock 00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3256561 ']' 00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:47.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
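Each round between "spdk_app_start Round N" and the SIGTERM above follows the same nbd_rpc_data_verify pattern. Condensed from the trace with the workspace paths shortened (a sketch of the sequence, not the verbatim harness code):

    sock=/var/tmp/spdk-nbd.sock
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096       # -> Malloc0
    scripts/rpc.py -s $sock bdev_malloc_create 64 4096       # -> Malloc1
    scripts/rpc.py -s $sock nbd_start_disk Malloc0 /dev/nbd0
    scripts/rpc.py -s $sock nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=nbdrandtest bs=4096 count=256      # 1 MiB of random data
    for d in /dev/nbd0 /dev/nbd1; do
        dd if=nbdrandtest of=$d bs=4096 count=256 oflag=direct   # write it to each disk
    done
    for d in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M nbdrandtest $d                          # read back and verify
    done
    rm nbdrandtest
    scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd0
    scripts/rpc.py -s $sock nbd_stop_disk /dev/nbd1
    scripts/rpc.py -s $sock spdk_kill_instance SIGTERM       # end the round; the app restarts for the next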
00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:47.675 23:42:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:47.933 23:42:18 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:47.933 23:42:18 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:47.933 23:42:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.195 Malloc0 00:04:48.195 23:42:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:48.502 Malloc1 00:04:48.502 23:42:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.502 23:42:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:48.786 /dev/nbd0 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:48.786 1+0 records in 00:04:48.786 1+0 records out 00:04:48.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197768 s, 20.7 MB/s 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:48.786 23:42:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:48.786 /dev/nbd1 00:04:48.786 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:49.044 1+0 records in 00:04:49.044 1+0 records out 00:04:49.044 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000257083 s, 15.9 MB/s 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:49.044 23:42:19 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.044 23:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:49.303 { 00:04:49.303 "nbd_device": "/dev/nbd0", 00:04:49.303 "bdev_name": "Malloc0" 00:04:49.303 }, 00:04:49.303 { 00:04:49.303 "nbd_device": "/dev/nbd1", 00:04:49.303 "bdev_name": "Malloc1" 00:04:49.303 } 00:04:49.303 ]' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:49.303 { 00:04:49.303 "nbd_device": "/dev/nbd0", 00:04:49.303 "bdev_name": "Malloc0" 00:04:49.303 }, 00:04:49.303 { 00:04:49.303 "nbd_device": "/dev/nbd1", 00:04:49.303 "bdev_name": "Malloc1" 00:04:49.303 } 00:04:49.303 ]' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:49.303 /dev/nbd1' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:49.303 /dev/nbd1' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:49.303 256+0 records in 00:04:49.303 256+0 records out 00:04:49.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00495365 s, 212 MB/s 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:49.303 256+0 records in 00:04:49.303 256+0 records out 00:04:49.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0233965 s, 44.8 MB/s 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:49.303 256+0 records in 00:04:49.303 256+0 records out 00:04:49.303 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0218801 s, 47.9 MB/s 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.303 23:42:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.561 23:42:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:49.562 23:42:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:49.819 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:49.820 23:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:50.078 23:42:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:50.078 23:42:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:50.335 23:42:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:50.593 [2024-07-24 23:42:21.179461] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.851 [2024-07-24 23:42:21.295252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.851 [2024-07-24 23:42:21.295253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.851 [2024-07-24 23:42:21.353977] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:50.851 [2024-07-24 23:42:21.354050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:53.379 23:42:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:53.379 23:42:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:53.379 spdk_app_start Round 2 00:04:53.379 23:42:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3256561 /var/tmp/spdk-nbd.sock 00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3256561 ']' 00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:53.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
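The count checks interleaved above ('[' 2 -ne 2 ']' while the disks are up, then 0 after teardown) come from parsing nbd_get_disks; the same check can be reproduced directly (sketch, socket as in this run):

    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' \
        | grep -c /dev/nbd    # 2 while both disks are attached, 0 once they are stopped
    # grep -c exits nonzero when it counts 0 matches; note the "# true" in the
    # trace where the helper tolerates exactly that case.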
00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:53.379 23:42:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:53.637 23:42:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:53.637 23:42:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:53.637 23:42:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:53.895 Malloc0 00:04:53.895 23:42:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:54.154 Malloc1 00:04:54.154 23:42:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.154 23:42:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:54.412 /dev/nbd0 00:04:54.412 23:42:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:54.412 23:42:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:54.412 1+0 records in 00:04:54.412 1+0 records out 00:04:54.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227383 s, 18.0 MB/s 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.412 23:42:24 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.412 23:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.412 23:42:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.412 23:42:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:54.670 /dev/nbd1 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:54.670 1+0 records in 00:04:54.670 1+0 records out 00:04:54.670 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209436 s, 19.6 MB/s 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:04:54.670 23:42:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:54.670 23:42:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:04:54.928 { 00:04:54.928 "nbd_device": "/dev/nbd0", 00:04:54.928 "bdev_name": "Malloc0" 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "nbd_device": "/dev/nbd1", 00:04:54.928 "bdev_name": "Malloc1" 00:04:54.928 } 00:04:54.928 ]' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:54.928 { 00:04:54.928 "nbd_device": "/dev/nbd0", 00:04:54.928 "bdev_name": "Malloc0" 00:04:54.928 }, 00:04:54.928 { 00:04:54.928 "nbd_device": "/dev/nbd1", 00:04:54.928 "bdev_name": "Malloc1" 00:04:54.928 } 00:04:54.928 ]' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:54.928 /dev/nbd1' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:54.928 /dev/nbd1' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:54.928 256+0 records in 00:04:54.928 256+0 records out 00:04:54.928 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00491397 s, 213 MB/s 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:54.928 23:42:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:55.186 256+0 records in 00:04:55.186 256+0 records out 00:04:55.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0209901 s, 50.0 MB/s 00:04:55.186 23:42:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:55.186 23:42:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:55.186 256+0 records in 00:04:55.186 256+0 records out 00:04:55.186 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023996 s, 43.7 MB/s 00:04:55.186 23:42:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.187 23:42:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.444 23:42:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.445 23:42:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:55.445 23:42:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:04:55.702 23:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:55.960 23:42:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:55.960 23:42:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:56.218 23:42:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:56.475 [2024-07-24 23:42:26.973417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.735 [2024-07-24 23:42:27.087938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.735 [2024-07-24 23:42:27.087938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.735 [2024-07-24 23:42:27.148496] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:56.735 [2024-07-24 23:42:27.148570] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:59.260 23:42:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3256561 /var/tmp/spdk-nbd.sock 00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 3256561 ']' 00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:59.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
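The waitfornbd checks that recur before every dd in this log are worth spelling out. Condensed from the trace (the real helper in autotest_common.sh retries up to 20 times, per the "(( i <= 20 ))" loops above; this sketch shows a single pass):

    nbd=nbd1
    grep -q -w "$nbd" /proc/partitions                          # device registered with the kernel?
    dd if=/dev/$nbd of=nbdtest bs=4096 count=1 iflag=direct     # one direct read must succeed
    size=$(stat -c %s nbdtest)
    rm -f nbdtest
    [ "$size" != 0 ]                                            # and must return real data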
00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.260 23:42:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:04:59.518 23:42:29 event.app_repeat -- event/event.sh@39 -- # killprocess 3256561 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 3256561 ']' 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 3256561 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3256561 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:59.518 23:42:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:59.519 23:42:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3256561' 00:04:59.519 killing process with pid 3256561 00:04:59.519 23:42:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 3256561 00:04:59.519 23:42:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 3256561 00:04:59.776 spdk_app_start is called in Round 0. 00:04:59.776 Shutdown signal received, stop current app iteration 00:04:59.776 Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 reinitialization... 00:04:59.776 spdk_app_start is called in Round 1. 00:04:59.776 Shutdown signal received, stop current app iteration 00:04:59.776 Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 reinitialization... 00:04:59.776 spdk_app_start is called in Round 2. 00:04:59.776 Shutdown signal received, stop current app iteration 00:04:59.776 Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 reinitialization... 00:04:59.776 spdk_app_start is called in Round 3. 
00:04:59.776 Shutdown signal received, stop current app iteration 00:04:59.776 23:42:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:59.776 23:42:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:59.776 00:04:59.776 real 0m18.018s 00:04:59.776 user 0m38.980s 00:04:59.776 sys 0m3.184s 00:04:59.776 23:42:30 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.776 23:42:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:59.776 ************************************ 00:04:59.776 END TEST app_repeat 00:04:59.776 ************************************ 00:04:59.776 23:42:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:59.776 23:42:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.776 23:42:30 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.776 23:42:30 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.776 23:42:30 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.776 ************************************ 00:04:59.776 START TEST cpu_locks 00:04:59.776 ************************************ 00:04:59.776 23:42:30 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:59.776 * Looking for test storage... 00:04:59.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:59.776 23:42:30 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:59.776 23:42:30 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:59.776 23:42:30 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:59.776 23:42:30 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:59.776 23:42:30 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.776 23:42:30 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.776 23:42:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:59.776 ************************************ 00:04:59.777 START TEST default_locks 00:04:59.777 ************************************ 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3258974 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3258974 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3258974 ']' 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
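The default_locks case starting here verifies the CPU lock files the target takes for its claimed cores. The locks_exist helper seen next in the trace boils down to the following (a sketch; the "lslocks: write error" line below appears to be harmless stderr noise from lslocks itself):

    spdk_tgt_pid=3258974    # pid from this run
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock   # succeeds while the target holds its core lock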
00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:59.777 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:00.072 [2024-07-24 23:42:30.422621] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:00.072 [2024-07-24 23:42:30.422718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3258974 ] 00:05:00.072 EAL: No free 2048 kB hugepages reported on node 1 00:05:00.072 [2024-07-24 23:42:30.479506] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.072 [2024-07-24 23:42:30.593676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.330 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:00.330 23:42:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:00.330 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3258974 00:05:00.330 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3258974 00:05:00.330 23:42:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:00.896 lslocks: write error 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3258974 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 3258974 ']' 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 3258974 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3258974 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3258974' 00:05:00.896 killing process with pid 3258974 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 3258974 00:05:00.896 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 3258974 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3258974 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3258974 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 3258974 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 3258974 ']' 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3258974) - No such process 00:05:01.154 ERROR: process (pid: 3258974) is no longer running 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:01.154 00:05:01.154 real 0m1.349s 00:05:01.154 user 0m1.264s 00:05:01.154 sys 0m0.562s 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.154 23:42:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.154 ************************************ 00:05:01.154 END TEST default_locks 00:05:01.154 ************************************ 00:05:01.154 23:42:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:01.154 23:42:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.154 23:42:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.154 23:42:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:01.154 ************************************ 00:05:01.154 START TEST default_locks_via_rpc 00:05:01.154 ************************************ 00:05:01.154 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:01.154 23:42:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3259197 00:05:01.154 23:42:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:01.154 23:42:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 
3259197 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3259197 ']' 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:01.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:01.155 23:42:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:01.413 [2024-07-24 23:42:31.816102] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:01.413 [2024-07-24 23:42:31.816181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259197 ] 00:05:01.413 EAL: No free 2048 kB hugepages reported on node 1 00:05:01.413 [2024-07-24 23:42:31.880390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.413 [2024-07-24 23:42:31.995829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3259197 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3259197 00:05:02.344 23:42:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:02.600 23:42:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # 
killprocess 3259197 00:05:02.600 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 3259197 ']' 00:05:02.600 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 3259197 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259197 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259197' 00:05:02.601 killing process with pid 3259197 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 3259197 00:05:02.601 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 3259197 00:05:03.166 00:05:03.166 real 0m1.783s 00:05:03.166 user 0m1.940s 00:05:03.166 sys 0m0.536s 00:05:03.166 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.166 23:42:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.166 ************************************ 00:05:03.166 END TEST default_locks_via_rpc 00:05:03.166 ************************************ 00:05:03.166 23:42:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:03.166 23:42:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:03.166 23:42:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.166 23:42:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:03.166 ************************************ 00:05:03.166 START TEST non_locking_app_on_locked_coremask 00:05:03.166 ************************************ 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3259371 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3259371 /var/tmp/spdk.sock 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3259371 ']' 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:03.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.166 23:42:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.166 [2024-07-24 23:42:33.653255] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:03.166 [2024-07-24 23:42:33.653338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259371 ] 00:05:03.166 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.166 [2024-07-24 23:42:33.715921] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.424 [2024-07-24 23:42:33.831955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3259496 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3259496 /var/tmp/spdk2.sock 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3259496 ']' 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:03.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.709 23:42:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:03.709 [2024-07-24 23:42:34.148550] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:03.709 [2024-07-24 23:42:34.148638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259496 ] 00:05:03.709 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.709 [2024-07-24 23:42:34.244475] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:03.709 [2024-07-24 23:42:34.244520] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.967 [2024-07-24 23:42:34.478291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.532 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:04.532 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:04.532 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3259371 00:05:04.532 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3259371 00:05:04.532 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:05.097 lslocks: write error 00:05:05.097 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3259371 00:05:05.097 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3259371 ']' 00:05:05.097 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3259371 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259371 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259371' 00:05:05.098 killing process with pid 3259371 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3259371 00:05:05.098 23:42:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3259371 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3259496 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3259496 ']' 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3259496 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259496 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259496' 00:05:06.031 
killing process with pid 3259496 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3259496 00:05:06.031 23:42:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3259496 00:05:06.597 00:05:06.597 real 0m3.497s 00:05:06.597 user 0m3.623s 00:05:06.597 sys 0m1.105s 00:05:06.597 23:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.597 23:42:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.597 ************************************ 00:05:06.597 END TEST non_locking_app_on_locked_coremask 00:05:06.597 ************************************ 00:05:06.597 23:42:37 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:06.597 23:42:37 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:06.597 23:42:37 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.597 23:42:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:06.597 ************************************ 00:05:06.597 START TEST locking_app_on_unlocked_coremask 00:05:06.597 ************************************ 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3259880 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3259880 /var/tmp/spdk.sock 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3259880 ']' 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:06.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:06.597 23:42:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:06.597 [2024-07-24 23:42:37.199454] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:06.597 [2024-07-24 23:42:37.199535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259880 ] 00:05:06.855 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.855 [2024-07-24 23:42:37.261372] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:06.855 [2024-07-24 23:42:37.261408] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.855 [2024-07-24 23:42:37.379074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3259945 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3259945 /var/tmp/spdk2.sock 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3259945 ']' 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:07.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:07.788 23:42:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:07.788 [2024-07-24 23:42:38.177572] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:07.788 [2024-07-24 23:42:38.177655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3259945 ] 00:05:07.788 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.788 [2024-07-24 23:42:38.276401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.046 [2024-07-24 23:42:38.510013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.613 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:08.613 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:08.613 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3259945 00:05:08.613 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3259945 00:05:08.613 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:09.179 lslocks: write error 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3259880 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3259880 ']' 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3259880 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259880 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259880' 00:05:09.179 killing process with pid 3259880 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3259880 00:05:09.179 23:42:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3259880 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3259945 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3259945 ']' 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 3259945 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3259945 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3259945' 00:05:10.112 killing process with pid 3259945 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 3259945 00:05:10.112 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 3259945 00:05:10.370 00:05:10.370 real 0m3.780s 00:05:10.370 user 0m4.114s 00:05:10.370 sys 0m1.069s 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.370 ************************************ 00:05:10.370 END TEST locking_app_on_unlocked_coremask 00:05:10.370 ************************************ 00:05:10.370 23:42:40 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:10.370 23:42:40 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.370 23:42:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.370 23:42:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.370 ************************************ 00:05:10.370 START TEST locking_app_on_locked_coremask 00:05:10.370 ************************************ 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3260371 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3260371 /var/tmp/spdk.sock 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3260371 ']' 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.370 23:42:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:10.628 [2024-07-24 23:42:41.025608] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:10.628 [2024-07-24 23:42:41.025711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260371 ] 00:05:10.628 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.628 [2024-07-24 23:42:41.092501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.628 [2024-07-24 23:42:41.213005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.886 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:10.886 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:10.886 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3260385 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3260385 /var/tmp/spdk2.sock 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3260385 /var/tmp/spdk2.sock 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3260385 /var/tmp/spdk2.sock 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 3260385 ']' 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:10.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:10.887 23:42:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:11.145 [2024-07-24 23:42:41.527396] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:11.145 [2024-07-24 23:42:41.527485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260385 ] 00:05:11.145 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.145 [2024-07-24 23:42:41.624581] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3260371 has claimed it. 00:05:11.145 [2024-07-24 23:42:41.624641] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:11.710 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3260385) - No such process 00:05:11.710 ERROR: process (pid: 3260385) is no longer running 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3260371 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3260371 00:05:11.710 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:12.276 lslocks: write error 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3260371 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 3260371 ']' 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 3260371 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3260371 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3260371' 00:05:12.276 killing process with pid 3260371 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 3260371 00:05:12.276 23:42:42 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 3260371 00:05:12.534 00:05:12.534 real 0m2.098s 00:05:12.534 user 0m2.303s 00:05:12.534 sys 0m0.661s 00:05:12.534 23:42:43 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.534 23:42:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.534 ************************************ 00:05:12.534 END TEST locking_app_on_locked_coremask 00:05:12.534 ************************************ 00:05:12.534 23:42:43 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:12.534 23:42:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.534 23:42:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.534 23:42:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:12.534 ************************************ 00:05:12.534 START TEST locking_overlapped_coremask 00:05:12.534 ************************************ 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3260669 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3260669 /var/tmp/spdk.sock 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3260669 ']' 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.534 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:12.792 [2024-07-24 23:42:43.165799] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:12.792 [2024-07-24 23:42:43.165899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260669 ] 00:05:12.792 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.792 [2024-07-24 23:42:43.230685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.792 [2024-07-24 23:42:43.356289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.792 [2024-07-24 23:42:43.356466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.792 [2024-07-24 23:42:43.356471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3260685 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3260685 /var/tmp/spdk2.sock 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 3260685 /var/tmp/spdk2.sock 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 3260685 /var/tmp/spdk2.sock 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 3260685 ']' 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:13.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.050 23:42:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:13.050 [2024-07-24 23:42:43.659824] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:13.050 [2024-07-24 23:42:43.659906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260685 ] 00:05:13.308 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.308 [2024-07-24 23:42:43.747588] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3260669 has claimed it. 00:05:13.308 [2024-07-24 23:42:43.747648] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:13.874 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (3260685) - No such process 00:05:13.874 ERROR: process (pid: 3260685) is no longer running 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3260669 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 3260669 ']' 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 3260669 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3260669 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3260669' 00:05:13.874 killing process with pid 3260669 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 3260669 00:05:13.874 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 3260669 00:05:14.439 00:05:14.439 real 0m1.730s 00:05:14.439 user 0m4.563s 00:05:14.439 sys 0m0.459s 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:14.439 ************************************ 00:05:14.439 END TEST locking_overlapped_coremask 00:05:14.439 ************************************ 00:05:14.439 23:42:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:14.439 23:42:44 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.439 23:42:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.439 23:42:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.439 ************************************ 00:05:14.439 START TEST locking_overlapped_coremask_via_rpc 00:05:14.439 ************************************ 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3260855 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3260855 /var/tmp/spdk.sock 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3260855 ']' 00:05:14.439 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.440 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.440 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.440 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.440 23:42:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.440 [2024-07-24 23:42:44.946835] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:14.440 [2024-07-24 23:42:44.946948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260855 ] 00:05:14.440 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.440 [2024-07-24 23:42:45.004839] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.440 [2024-07-24 23:42:45.004873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:14.698 [2024-07-24 23:42:45.116876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.698 [2024-07-24 23:42:45.120274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:14.698 [2024-07-24 23:42:45.120288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3260973 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3260973 /var/tmp/spdk2.sock 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3260973 ']' 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:14.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:14.956 23:42:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.956 [2024-07-24 23:42:45.417013] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:14.956 [2024-07-24 23:42:45.417096] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3260973 ] 00:05:14.956 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.956 [2024-07-24 23:42:45.504178] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:14.956 [2024-07-24 23:42:45.504210] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:15.214 [2024-07-24 23:42:45.727451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:15.214 [2024-07-24 23:42:45.727514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:15.214 [2024-07-24 23:42:45.727517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.780 [2024-07-24 23:42:46.363357] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3260855 has claimed it. 
00:05:15.780 request: 00:05:15.780 { 00:05:15.780 "method": "framework_enable_cpumask_locks", 00:05:15.780 "req_id": 1 00:05:15.780 } 00:05:15.780 Got JSON-RPC error response 00:05:15.780 response: 00:05:15.780 { 00:05:15.780 "code": -32603, 00:05:15.780 "message": "Failed to claim CPU core: 2" 00:05:15.780 } 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3260855 /var/tmp/spdk.sock 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3260855 ']' 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.780 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.037 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.037 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.037 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3260973 /var/tmp/spdk2.sock 00:05:16.037 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 3260973 ']' 00:05:16.037 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:16.038 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.038 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:16.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
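The request/response pair above is the standard JSON-RPC error shape SPDK returns: the method name and req_id go in, and code -32603 (internal error) comes back with the reason string. The same call can be issued by hand with the rpc.py client shipped in the SPDK tree (socket path taken from the trace; rpc.py path relative to the SPDK checkout):
# Ask the second target to claim its cores; while pid 3260855 still holds
# the core 2 lock file this fails with -32603, exactly as logged above.
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks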
00:05:16.038 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.038 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:16.295 00:05:16.295 real 0m1.966s 00:05:16.295 user 0m1.024s 00:05:16.295 sys 0m0.164s 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.295 23:42:46 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.295 ************************************ 00:05:16.295 END TEST locking_overlapped_coremask_via_rpc 00:05:16.295 ************************************ 00:05:16.295 23:42:46 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:16.295 23:42:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3260855 ]] 00:05:16.295 23:42:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3260855 00:05:16.295 23:42:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3260855 ']' 00:05:16.295 23:42:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3260855 00:05:16.295 23:42:46 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.295 23:42:46 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.295 23:42:46 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3260855 00:05:16.553 23:42:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.553 23:42:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.553 23:42:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3260855' 00:05:16.553 killing process with pid 3260855 00:05:16.553 23:42:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3260855 00:05:16.553 23:42:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3260855 00:05:16.810 23:42:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3260973 ]] 00:05:16.810 23:42:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3260973 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3260973 ']' 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3260973 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3260973 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3260973' 00:05:16.810 killing process with pid 3260973 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 3260973 00:05:16.810 23:42:47 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 3260973 00:05:17.373 23:42:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.373 23:42:47 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:17.373 23:42:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3260855 ]] 00:05:17.373 23:42:47 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3260855 00:05:17.373 23:42:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3260855 ']' 00:05:17.373 23:42:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3260855 00:05:17.373 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3260855) - No such process 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3260855 is not found' 00:05:17.374 Process with pid 3260855 is not found 00:05:17.374 23:42:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3260973 ]] 00:05:17.374 23:42:47 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3260973 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 3260973 ']' 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 3260973 00:05:17.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3260973) - No such process 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 3260973 is not found' 00:05:17.374 Process with pid 3260973 is not found 00:05:17.374 23:42:47 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:17.374 00:05:17.374 real 0m17.543s 00:05:17.374 user 0m29.666s 00:05:17.374 sys 0m5.444s 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.374 23:42:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:17.374 ************************************ 00:05:17.374 END TEST cpu_locks 00:05:17.374 ************************************ 00:05:17.374 00:05:17.374 real 0m41.627s 00:05:17.374 user 1m17.704s 00:05:17.374 sys 0m9.439s 00:05:17.374 23:42:47 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.374 23:42:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.374 ************************************ 00:05:17.374 END TEST event 00:05:17.374 ************************************ 00:05:17.374 23:42:47 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.374 23:42:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.374 23:42:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.374 23:42:47 -- common/autotest_common.sh@10 -- # set +x 00:05:17.374 ************************************ 00:05:17.374 START TEST thread 00:05:17.374 ************************************ 00:05:17.374 23:42:47 thread -- 
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:17.374 * Looking for test storage... 00:05:17.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:17.374 23:42:47 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.374 23:42:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:17.374 23:42:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.374 23:42:47 thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.374 ************************************ 00:05:17.374 START TEST thread_poller_perf 00:05:17.374 ************************************ 00:05:17.374 23:42:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:17.631 [2024-07-24 23:42:47.994615] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:17.631 [2024-07-24 23:42:47.994679] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261342 ] 00:05:17.631 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.631 [2024-07-24 23:42:48.058407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.631 [2024-07-24 23:42:48.174626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.631 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:19.001 ====================================== 00:05:19.001 busy:2712226144 (cyc) 00:05:19.001 total_run_count: 292000 00:05:19.001 tsc_hz: 2700000000 (cyc) 00:05:19.001 ====================================== 00:05:19.001 poller_cost: 9288 (cyc), 3440 (nsec) 00:05:19.001 00:05:19.001 real 0m1.323s 00:05:19.001 user 0m1.234s 00:05:19.001 sys 0m0.084s 00:05:19.001 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.001 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.001 ************************************ 00:05:19.001 END TEST thread_poller_perf 00:05:19.002 ************************************ 00:05:19.002 23:42:49 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.002 23:42:49 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:19.002 23:42:49 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.002 23:42:49 thread -- common/autotest_common.sh@10 -- # set +x 00:05:19.002 ************************************ 00:05:19.002 START TEST thread_poller_perf 00:05:19.002 ************************************ 00:05:19.002 23:42:49 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:19.002 [2024-07-24 23:42:49.364458] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:19.002 [2024-07-24 23:42:49.364521] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261501 ] 00:05:19.002 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.002 [2024-07-24 23:42:49.426365] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.002 [2024-07-24 23:42:49.545559] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.002 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:20.379 ====================================== 00:05:20.379 busy:2703111493 (cyc) 00:05:20.379 total_run_count: 3857000 00:05:20.379 tsc_hz: 2700000000 (cyc) 00:05:20.379 ====================================== 00:05:20.379 poller_cost: 700 (cyc), 259 (nsec) 00:05:20.379 00:05:20.379 real 0m1.319s 00:05:20.379 user 0m1.233s 00:05:20.379 sys 0m0.080s 00:05:20.379 23:42:50 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.379 23:42:50 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.379 ************************************ 00:05:20.379 END TEST thread_poller_perf 00:05:20.379 ************************************ 00:05:20.379 23:42:50 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:20.379 00:05:20.379 real 0m2.785s 00:05:20.379 user 0m2.514s 00:05:20.379 sys 0m0.269s 00:05:20.379 23:42:50 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.379 23:42:50 thread -- common/autotest_common.sh@10 -- # set +x 00:05:20.379 ************************************ 00:05:20.379 END TEST thread 00:05:20.379 ************************************ 00:05:20.379 23:42:50 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.379 23:42:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:20.379 23:42:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.379 23:42:50 -- common/autotest_common.sh@10 -- # set +x 00:05:20.379 ************************************ 00:05:20.379 START TEST accel 00:05:20.379 ************************************ 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:20.379 * Looking for test storage... 
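The poller_cost figures in the two poller_perf runs above are plain arithmetic over the reported counters: cycles per poll = busy cycles / total_run_count, and the nanosecond figure divides that by the TSC rate in GHz (tsc_hz 2700000000 = 2.7 cycles/ns). Rechecking the logged numbers:
# 1 us period run: 2712226144 / 292000  = ~9288 cyc ; 9288 / 2.7 = ~3440 ns
# 0 us period run: 2703111493 / 3857000 = ~700 cyc  ; 700  / 2.7 = ~259 ns
awk 'BEGIN { c = 2712226144 / 292000; printf "%d cyc, %d ns\n", c, c / 2.7 }'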
00:05:20.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:20.379 23:42:50 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:05:20.379 23:42:50 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:05:20.379 23:42:50 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:20.379 23:42:50 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=3261813 00:05:20.379 23:42:50 accel -- accel/accel.sh@63 -- # waitforlisten 3261813 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@829 -- # '[' -z 3261813 ']' 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:20.379 23:42:50 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:20.379 23:42:50 accel -- accel/accel.sh@61 -- # build_accel_config 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:20.379 23:42:50 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.379 23:42:50 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:20.379 23:42:50 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.379 23:42:50 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.379 23:42:50 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.379 23:42:50 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.379 23:42:50 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:20.379 23:42:50 accel -- accel/accel.sh@41 -- # jq -r . 00:05:20.379 [2024-07-24 23:42:50.840454] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:20.379 [2024-07-24 23:42:50.840562] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261813 ] 00:05:20.379 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.379 [2024-07-24 23:42:50.901723] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.645 [2024-07-24 23:42:51.010204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.902 23:42:51 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.902 23:42:51 accel -- common/autotest_common.sh@862 -- # return 0 00:05:20.902 23:42:51 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:05:20.902 23:42:51 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:05:20.902 23:42:51 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:05:20.903 23:42:51 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:05:20.903 23:42:51 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:20.903 23:42:51 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:20.903 23:42:51 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 
23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # IFS== 00:05:20.903 23:42:51 accel -- accel/accel.sh@72 -- # read -r opc module 00:05:20.903 23:42:51 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:05:20.903 23:42:51 accel -- accel/accel.sh@75 -- # killprocess 3261813 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@948 -- # '[' -z 3261813 ']' 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@952 -- # kill -0 3261813 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@953 -- # uname 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3261813 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3261813' 00:05:20.903 killing process with pid 3261813 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@967 -- # kill 3261813 00:05:20.903 23:42:51 accel -- common/autotest_common.sh@972 -- # wait 3261813 00:05:21.505 23:42:51 accel -- accel/accel.sh@76 -- # trap - ERR 00:05:21.505 23:42:51 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.505 23:42:51 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:21.505 23:42:51 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
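The long opcode walk above is the accel suite priming its expected_opcs table: the accel_get_opc_assignments RPC returns a JSON map of opcode to module, and the jq filter visible in the trace flattens it into key=value pairs. Standalone (rpc.py path relative to the SPDK checkout), that step is just:
./scripts/rpc.py accel_get_opc_assignments \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# one "<opcode>=software" line per operation; every entry is software here
# because no hardware accel module was loaded (the [[ -n '' ]] checks above)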
00:05:21.505 23:42:51 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.505 23:42:51 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:05:21.505 23:42:51 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.505 23:42:51 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.505 ************************************ 00:05:21.505 START TEST accel_missing_filename 00:05:21.505 ************************************ 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:21.505 23:42:51 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:05:21.505 23:42:51 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:05:21.505 [2024-07-24 23:42:51.913501] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:21.505 [2024-07-24 23:42:51.913575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3261985 ] 00:05:21.505 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.505 [2024-07-24 23:42:51.975952] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.505 [2024-07-24 23:42:52.096376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.763 [2024-07-24 23:42:52.157129] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:21.763 [2024-07-24 23:42:52.241362] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:21.763 A filename is required. 
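accel_missing_filename is a negative test: the NOT wrapper from autotest_common.sh inverts the exit status, so the case passes precisely because accel_perf -w compress aborts without -l <input file> ("A filename is required."). Reduced to its essence the pattern looks like the sketch below; the real helper also remaps exit codes above 128, which is the es=234 -> es=106 step visible right after this:
# Minimal sketch of the negative-test wrapper (not the full helper).
NOT() {
    if "$@"; then return 1; fi   # inner command succeeded -> the NOT test fails
    return 0                     # inner command failed    -> the NOT test passes
}
NOT ./build/examples/accel_perf -t 1 -w compress && echo 'negative test passed'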
00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:21.763 00:05:21.763 real 0m0.472s 00:05:21.763 user 0m0.358s 00:05:21.763 sys 0m0.147s 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.763 23:42:52 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:05:21.763 ************************************ 00:05:21.763 END TEST accel_missing_filename 00:05:21.763 ************************************ 00:05:22.020 23:42:52 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.020 23:42:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:22.020 23:42:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.020 23:42:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.020 ************************************ 00:05:22.020 START TEST accel_compress_verify 00:05:22.020 ************************************ 00:05:22.020 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.020 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.021 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.021 
23:42:52 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:22.021 23:42:52 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:05:22.021 [2024-07-24 23:42:52.432668] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:22.021 [2024-07-24 23:42:52.432732] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262011 ] 00:05:22.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.021 [2024-07-24 23:42:52.494331] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.021 [2024-07-24 23:42:52.614811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.279 [2024-07-24 23:42:52.674361] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:22.279 [2024-07-24 23:42:52.758579] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:05:22.279 00:05:22.279 Compression does not support the verify option, aborting. 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.279 00:05:22.279 real 0m0.467s 00:05:22.279 user 0m0.360s 00:05:22.279 sys 0m0.140s 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.279 23:42:52 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:05:22.279 ************************************ 00:05:22.279 END TEST accel_compress_verify 00:05:22.279 ************************************ 00:05:22.537 23:42:52 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.537 ************************************ 00:05:22.537 START TEST accel_wrong_workload 00:05:22.537 ************************************ 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 
1 -w foobar 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:22.537 23:42:52 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:22.537 Unsupported workload type: foobar 00:05:22.537 [2024-07-24 23:42:52.944095] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:22.537 accel_perf options: 00:05:22.537 [-h help message] 00:05:22.537 [-q queue depth per core] 00:05:22.537 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.537 [-T number of threads per core 00:05:22.537 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.537 [-t time in seconds] 00:05:22.537 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.537 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:22.537 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.537 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.537 [-S for crc32c workload, use this seed value (default 0) 00:05:22.537 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.537 [-f for fill workload, use this BYTE value (default 255) 00:05:22.537 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.537 [-y verify result if this switch is on] 00:05:22.537 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.537 Can be used to spread operations across a wider range of memory. 
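That usage text lines up with the positive tests further on: the crc32c cases below run accel_perf for one second of 4096-byte operations with seed value 32 and result verification enabled. A representative invocation (queue depth and task count left at their defaults, path relative to the SPDK checkout):
./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # same flags the accel_test runs below trace out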
00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.537 00:05:22.537 real 0m0.023s 00:05:22.537 user 0m0.011s 00:05:22.537 sys 0m0.012s 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.537 23:42:52 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:05:22.537 ************************************ 00:05:22.537 END TEST accel_wrong_workload 00:05:22.537 ************************************ 00:05:22.537 Error: writing output failed: Broken pipe 00:05:22.537 23:42:52 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.537 23:42:52 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.537 ************************************ 00:05:22.537 START TEST accel_negative_buffers 00:05:22.537 ************************************ 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.537 23:42:52 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:22.537 23:42:52 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:22.537 -x option must be non-negative. 
00:05:22.537 [2024-07-24 23:42:53.010371] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:22.537 accel_perf options: 00:05:22.537 [-h help message] 00:05:22.537 [-q queue depth per core] 00:05:22.537 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:22.537 [-T number of threads per core 00:05:22.537 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:22.537 [-t time in seconds] 00:05:22.537 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:22.537 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:05:22.537 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:22.537 [-l for compress/decompress workloads, name of uncompressed input file 00:05:22.537 [-S for crc32c workload, use this seed value (default 0) 00:05:22.537 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:22.537 [-f for fill workload, use this BYTE value (default 255) 00:05:22.537 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:22.537 [-y verify result if this switch is on] 00:05:22.537 [-a tasks to allocate per core (default: same value as -q)] 00:05:22.537 Can be used to spread operations across a wider range of memory. 00:05:22.537 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:05:22.537 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.537 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.537 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.538 00:05:22.538 real 0m0.023s 00:05:22.538 user 0m0.013s 00:05:22.538 sys 0m0.010s 00:05:22.538 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.538 23:42:53 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:05:22.538 ************************************ 00:05:22.538 END TEST accel_negative_buffers 00:05:22.538 ************************************ 00:05:22.538 Error: writing output failed: Broken pipe 00:05:22.538 23:42:53 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:22.538 23:42:53 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:22.538 23:42:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.538 23:42:53 accel -- common/autotest_common.sh@10 -- # set +x 00:05:22.538 ************************************ 00:05:22.538 START TEST accel_crc32c 00:05:22.538 ************************************ 00:05:22.538 23:42:53 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 
-w crc32c -S 32 -y 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:22.538 23:42:53 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:22.538 [2024-07-24 23:42:53.069416] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:22.538 [2024-07-24 23:42:53.069475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262196 ] 00:05:22.538 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.538 [2024-07-24 23:42:53.131316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.795 [2024-07-24 23:42:53.252402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 
23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:22.795 23:42:53 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 
accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:24.166 23:42:54 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:24.166 00:05:24.166 real 0m1.456s 00:05:24.166 user 0m1.319s 00:05:24.166 sys 0m0.140s 00:05:24.166 23:42:54 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:24.166 23:42:54 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:24.166 ************************************ 00:05:24.166 END TEST accel_crc32c 00:05:24.166 ************************************ 00:05:24.166 23:42:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:24.166 23:42:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:24.166 23:42:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.166 23:42:54 accel -- common/autotest_common.sh@10 -- # set +x 00:05:24.166 ************************************ 00:05:24.166 START TEST accel_crc32c_C2 00:05:24.166 ************************************ 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:24.166 23:42:54 
00:05:24.166 23:42:54 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2
00:05:24.166 23:42:54 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:24.166 23:42:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:24.166 23:42:54 accel -- common/autotest_common.sh@10 -- # set +x
00:05:24.166 ************************************
00:05:24.166 START TEST accel_crc32c_C2
00:05:24.166 ************************************
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2
00:05:24.166 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:05:24.167 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:24.167 [2024-07-24 23:42:54.570745] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:24.167 [2024-07-24 23:42:54.570812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262352 ]
00:05:24.167 EAL: No free 2048 kB hugepages reported on node 1
00:05:24.167 [2024-07-24 23:42:54.632002] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:24.167 [2024-07-24 23:42:54.750074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:24.425 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:24.425 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c
00:05:24.425 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c
00:05:24.425 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:24.425 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:24.426 23:42:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:24.426 [... empty val= reads and the repeated @21 case / @19 IFS=: / @19 read xtrace omitted; end-of-run reads at 00:05:25.799 ...]
00:05:25.800 23:42:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:25.800 23:42:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:05:25.800 23:42:56 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:25.800
00:05:25.800 real 0m1.475s
00:05:25.800 user 0m1.328s
00:05:25.800 sys 0m0.149s
00:05:25.800 23:42:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:25.800 23:42:56 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:25.800 ************************************
00:05:25.800 END TEST accel_crc32c_C2
00:05:25.800 ************************************
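The long runs of case "$var" / IFS=: / read -r var val xtrace in these tests come from accel.sh reading back, over /dev/fd/62, the configuration that accel_perf echoes as name:value lines. A rough sketch of the shape of that loop, inferred from the trace alone (the case patterns are guesses; only the accel_opc and accel_module assignments are visible in the log):

    # Approximate shape of the accel.sh@19-@23 read-back loop seen above.
    while IFS=: read -r var val; do        # accel.sh@19
      case "$var" in                       # accel.sh@21
        opc)    accel_opc=$val ;;          # accel.sh@23: e.g. accel_opc=crc32c
        module) accel_module=$val ;;       # accel.sh@22: e.g. accel_module=software
        *)      : ;;                       # remaining fields are only logged
      esac
    done < <(printf 'opc:crc32c\nmodule:software\n')   # stand-in for the real fd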
00:05:25.800 23:42:56 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:05:25.800 23:42:56 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:25.800 23:42:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:25.800 23:42:56 accel -- common/autotest_common.sh@10 -- # set +x
00:05:25.800 ************************************
00:05:25.800 START TEST accel_copy
00:05:25.800 ************************************
00:05:25.800 23:42:56 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@41 -- # jq -r .
00:05:25.800 [2024-07-24 23:42:56.093008] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:25.800 [2024-07-24 23:42:56.093073] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262513 ]
00:05:25.800 EAL: No free 2048 kB hugepages reported on node 1
00:05:25.800 [2024-07-24 23:42:56.158314] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:25.800 [2024-07-24 23:42:56.280141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=copy
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=software
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=32
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=1
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:25.800 23:42:56 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes
00:05:25.800 [... empty val= reads and the repeated @21 case / @19 IFS=: / @19 read xtrace omitted; end-of-run reads at 00:05:27.171 ...]
00:05:27.172 23:42:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:27.172 23:42:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:05:27.172 23:42:57 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:27.172
00:05:27.172 real 0m1.476s
00:05:27.172 user 0m1.327s
00:05:27.172 sys 0m0.151s
00:05:27.172 23:42:57 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:27.172 23:42:57 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:05:27.172 ************************************
00:05:27.172 END TEST accel_copy
00:05:27.172 ************************************
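The real/user/sys triplet printed before each END banner is ordinary bash time output around the test body. A sketch that sweeps every software-path workload this section exercises, one second each (same binary and flags as above; not part of the harness itself):

    # Timed, verified 1-second run of each workload covered in this section.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    for wl in crc32c copy fill copy_crc32c dualcast compare; do
      time "$SPDK_DIR/build/examples/accel_perf" -t 1 -w "$wl" -y
    done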
00:05:27.172 23:42:57 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:27.172 23:42:57 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:05:27.172 23:42:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:27.172 23:42:57 accel -- common/autotest_common.sh@10 -- # set +x
00:05:27.172 ************************************
00:05:27.172 START TEST accel_fill
00:05:27.172 ************************************
00:05:27.172 23:42:57 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=,
00:05:27.172 23:42:57 accel.accel_fill -- accel/accel.sh@41 -- # jq -r .
00:05:27.172 [2024-07-24 23:42:57.613620] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:27.172 [2024-07-24 23:42:57.613687] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262782 ]
00:05:27.172 EAL: No free 2048 kB hugepages reported on node 1
00:05:27.172 [2024-07-24 23:42:57.681843] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:27.430 [2024-07-24 23:42:57.800371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=fill
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=software
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=64
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=1
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds'
00:05:27.430 23:42:57 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes
00:05:27.430 [... empty val= reads and the repeated @21 case / @19 IFS=: / @19 read xtrace omitted; end-of-run reads at 00:05:28.803 ...]
00:05:28.804 23:42:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:28.804 23:42:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:05:28.804 23:42:59 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:28.804
00:05:28.804 real 0m1.475s
00:05:28.804 user 0m1.337s
00:05:28.804 sys 0m0.139s
00:05:28.804 23:42:59 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:28.804 23:42:59 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:05:28.804 ************************************
00:05:28.804 END TEST accel_fill
00:05:28.804 ************************************
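accel_fill is the only case in this section that passes extra flags (-f 128 -q 64 -a 64), and its config read-back correspondingly shows 0x80 (128) and 64/64 where the other tests show 32/32. To reproduce it verbatim (the log does not spell out the flag semantics, so none are asserted here):

    # Fill workload with the exact extra flags recorded in the log.
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    "$SPDK_DIR/build/examples/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y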
00:05:28.804 23:42:59 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:05:28.804 23:42:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:28.804 23:42:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:28.804 23:42:59 accel -- common/autotest_common.sh@10 -- # set +x
00:05:28.804 ************************************
00:05:28.804 START TEST accel_copy_crc32c
00:05:28.804 ************************************
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=,
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r .
00:05:28.804 [2024-07-24 23:42:59.134660] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:28.804 [2024-07-24 23:42:59.134725] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3262946 ]
00:05:28.804 EAL: No free 2048 kB hugepages reported on node 1
00:05:28.804 [2024-07-24 23:42:59.200266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:28.804 [2024-07-24 23:42:59.323047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds'
00:05:28.804 23:42:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes
00:05:28.804 [... empty val= reads and the repeated @21 case / @19 IFS=: / @19 read xtrace omitted; end-of-run reads at 00:05:30.176 ...]
00:05:30.176 23:43:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:30.176
00:05:30.176 real 0m1.491s
00:05:30.176 user 0m1.352s
00:05:30.176 sys 0m0.141s
00:05:30.176 23:43:00 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:30.176 23:43:00 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:05:30.176 ************************************
00:05:30.176 END TEST accel_copy_crc32c
00:05:30.176 ************************************
00:05:30.176 23:43:00 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:05:30.176 23:43:00 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:30.176 23:43:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:30.176 23:43:00 accel -- common/autotest_common.sh@10 -- # set +x
00:05:30.176 ************************************
00:05:30.176 START TEST accel_copy_crc32c_C2
00:05:30.176 ************************************
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=,
00:05:30.176 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r .
00:05:30.176 [2024-07-24 23:43:00.676770] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:30.176 [2024-07-24 23:43:00.676836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263105 ]
00:05:30.176 EAL: No free 2048 kB hugepages reported on node 1
00:05:30.176 [2024-07-24 23:43:00.740564] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.434 [2024-07-24 23:43:00.865144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software
00:05:30.434 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software
00:05:30.435 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:30.435 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32
00:05:30.435 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1
00:05:30.435 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds'
00:05:30.435 23:43:00 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes
00:05:30.435 [... empty val= reads and the repeated @21 case / @19 IFS=: / @19 read xtrace omitted; end-of-run reads at 00:05:31.806 ...]
00:05:31.806 23:43:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:31.806 23:43:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:05:31.806 23:43:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:31.806
00:05:31.806 real 0m1.488s
00:05:31.806 user 0m1.343s
00:05:31.807 sys 0m0.148s
00:05:31.807 23:43:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:31.807 23:43:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:05:31.807 ************************************
00:05:31.807 END TEST accel_copy_crc32c_C2
00:05:31.807 ************************************
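Every test here is bracketed by identical START/END banners plus a timing measurement, all emitted by the run_test wrapper from common/autotest_common.sh. A minimal stand-in with the same observable output shape (a sketch, not the real implementation):

    # Banner-and-timing wrapper approximating what run_test prints in this log.
    run_test_sketch() {
      local name=$1; shift
      printf '************************************\nSTART TEST %s\n************************************\n' "$name"
      time "$@"        # produces the real/user/sys lines seen after each test
      printf '************************************\nEND TEST %s\n************************************\n' "$name"
    }
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
    run_test_sketch accel_dualcast "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dualcast -y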
00:05:31.807 [2024-07-24 23:43:02.208667] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263376 ] 00:05:31.807 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.807 [2024-07-24 23:43:02.274848] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.807 [2024-07-24 23:43:02.397418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:32.065 23:43:02 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:05:33.438 23:43:03 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:33.438 00:05:33.438 real 0m1.486s 00:05:33.438 user 0m1.352s 00:05:33.438 sys 0m0.136s 00:05:33.438 23:43:03 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:33.438 23:43:03 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:05:33.438 ************************************ 00:05:33.438 END TEST accel_dualcast 00:05:33.438 ************************************ 00:05:33.438 23:43:03 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:33.438 23:43:03 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:33.438 23:43:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.438 23:43:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:33.438 ************************************ 00:05:33.438 START TEST accel_compare 00:05:33.438 ************************************ 00:05:33.438 23:43:03 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:05:33.438 [2024-07-24 23:43:03.738845] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
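For orientation amid the xtrace noise: the repeated "val=... / case "$var" in / IFS=: / read -r var val" lines are accel.sh stepping through colon-separated opc:value pairs (workload name, buffer size, module, run parameters, verify flag) before it launches accel_perf. The accel_compare pass starting here boils down to a single invocation; a minimal sketch to reproduce it standalone, assuming the same SPDK tree built with examples (the -c /dev/fd/62 argument, by which the harness feeds its -- here empty -- JSON accel config over a file descriptor, is omitted):

  # 1-second compare workload on the software engine, verifying results (-y)
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ./build/examples/accel_perf -t 1 -w compare -y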
00:05:33.438 [2024-07-24 23:43:03.738912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263540 ] 00:05:33.438 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.438 [2024-07-24 23:43:03.805138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.438 [2024-07-24 23:43:03.927819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.438 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:33.439 23:43:03 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.812 
23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.812 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:05:34.813 23:43:05 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.813 00:05:34.813 real 0m1.492s 00:05:34.813 user 0m1.350s 00:05:34.813 sys 0m0.144s 00:05:34.813 23:43:05 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.813 23:43:05 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:05:34.813 ************************************ 00:05:34.813 END TEST accel_compare 00:05:34.813 ************************************ 00:05:34.813 23:43:05 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:34.813 23:43:05 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:34.813 23:43:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.813 23:43:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.813 ************************************ 00:05:34.813 START TEST accel_xor 00:05:34.813 ************************************ 00:05:34.813 23:43:05 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:34.813 23:43:05 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:34.813 [2024-07-24 23:43:05.276084] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
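The accel_xor pass spinning up here follows the same pattern with the xor opcode and the default two source buffers (the val=2 visible below). Equivalent standalone invocation, under the same assumptions as the compare sketch above:

  # xor across the default two source buffers, 1 second, verified
  ./build/examples/accel_perf -t 1 -w xor -y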
00:05:34.813 [2024-07-24 23:43:05.276150] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263696 ] 00:05:34.813 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.813 [2024-07-24 23:43:05.339898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.071 [2024-07-24 23:43:05.463010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.071 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:35.072 23:43:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.445 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:36.446 00:05:36.446 real 0m1.486s 00:05:36.446 user 0m1.337s 00:05:36.446 sys 0m0.150s 00:05:36.446 23:43:06 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.446 23:43:06 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:36.446 ************************************ 00:05:36.446 END TEST accel_xor 00:05:36.446 ************************************ 00:05:36.446 23:43:06 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:36.446 23:43:06 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:36.446 23:43:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.446 23:43:06 accel -- common/autotest_common.sh@10 -- # set +x 00:05:36.446 ************************************ 00:05:36.446 START TEST accel_xor 00:05:36.446 ************************************ 00:05:36.446 23:43:06 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:05:36.446 23:43:06 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:05:36.446 [2024-07-24 23:43:06.810667] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
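The second accel_xor pass starting here is the same workload widened to three source buffers via -x 3 (val=3 below); run_test registers it under the same accel_xor name, which is why the START/END banners repeat. Standalone equivalent, run from the spdk tree as above:

  # xor again, this time fanning in three source buffers
  ./build/examples/accel_perf -t 1 -w xor -y -x 3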
00:05:36.446 [2024-07-24 23:43:06.810733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3263971 ] 00:05:36.446 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.446 [2024-07-24 23:43:06.876884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.446 [2024-07-24 23:43:06.999398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.446 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:36.704 23:43:07 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:05:38.078 23:43:08 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:38.078 00:05:38.078 real 0m1.485s 00:05:38.078 user 0m1.342s 00:05:38.078 sys 0m0.145s 00:05:38.078 23:43:08 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.078 23:43:08 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:05:38.078 ************************************ 00:05:38.078 END TEST accel_xor 00:05:38.078 ************************************ 00:05:38.078 23:43:08 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:38.078 23:43:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:38.078 23:43:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.078 23:43:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:38.078 ************************************ 00:05:38.078 START TEST accel_dif_verify 00:05:38.078 ************************************ 00:05:38.078 23:43:08 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:05:38.078 [2024-07-24 23:43:08.337015] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
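The accel_dif_verify pass starting here drops -y: verification is the workload itself. The extra sizes in its val stream below ('512 bytes', '8 bytes') are presumably the DIF block and metadata sizes the harness configures alongside the 4096-byte transfer -- an inference from the values, not something the log states. Standalone sketch, same assumptions as above:

  # DIF verify workload; no -y, since the operation itself checks integrity
  ./build/examples/accel_perf -t 1 -w dif_verify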
00:05:38.078 [2024-07-24 23:43:08.337080] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264127 ] 00:05:38.078 EAL: No free 2048 kB hugepages reported on node 1 00:05:38.078 [2024-07-24 23:43:08.402620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:38.078 [2024-07-24 23:43:08.525486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.078 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:38.079 23:43:08 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.461 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:05:39.462 23:43:09 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:39.462 00:05:39.462 real 0m1.489s 00:05:39.462 user 0m1.349s 00:05:39.462 sys 0m0.144s 00:05:39.462 23:43:09 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.462 23:43:09 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:05:39.462 ************************************ 00:05:39.462 END TEST accel_dif_verify 00:05:39.462 ************************************ 00:05:39.462 23:43:09 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:39.462 23:43:09 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:39.462 23:43:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.462 23:43:09 accel -- common/autotest_common.sh@10 -- # set +x 00:05:39.462 ************************************ 00:05:39.462 START TEST accel_dif_generate 00:05:39.462 ************************************ 00:05:39.462 23:43:09 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 
-w dif_generate 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:05:39.462 23:43:09 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:05:39.462 [2024-07-24 23:43:09.876053] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:39.462 [2024-07-24 23:43:09.876127] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264307 ] 00:05:39.462 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.462 [2024-07-24 23:43:09.939055] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.462 [2024-07-24 23:43:10.064129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:05:39.750 23:43:10 
accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes'
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes'
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds'
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No
00:05:39.750 23:43:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:41.125 23:43:11 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=
00:05:41.125 23:43:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:41.125 23:43:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:41.125 23:43:11 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:41.125
00:05:41.125 real 0m1.490s
00:05:41.125 user 0m1.343s
00:05:41.125 sys 0m0.151s
00:05:41.125 23:43:11 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.125 23:43:11 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:41.125 ************************************
00:05:41.125 END TEST accel_dif_generate
00:05:41.125 ************************************
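Each of these accel blocks follows the same harness pattern: run_test wraps accel_test, which builds a JSON accel config and then drives the accel_perf example binary for one second per opcode, finally checking that a module and opcode were actually reported. A minimal sketch of that wrapper, assuming only what is visible in the trace lines above (the real accel.sh pipes a jq-built config to accel_perf on fd 62; an empty JSON object stands in for it here):

    #!/usr/bin/env bash
    # Sketch only: the binary path and flags are taken from the
    # accel.sh@12 trace lines; the empty config object is an assumption.
    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf

    accel_test() {
        # -t <sec>: run time, -w <opcode>: workload under test; extra
        # flags (-l, -y, -o, -m) pass straight through to accel_perf.
        "$accel_perf" -c /dev/fd/62 "$@" 62< <(echo '{}')
    }

    accel_test -t 1 -w dif_generate   # the test that just finished above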
00:05:41.125 23:43:11 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy
00:05:41.125 23:43:11 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:41.125 23:43:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:41.125 23:43:11 accel -- common/autotest_common.sh@10 -- # set +x
00:05:41.125 ************************************
00:05:41.125 START TEST accel_dif_generate_copy
00:05:41.125 ************************************
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy
00:05:41.125 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=,
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:43:11.410495] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:41.126 [2024-07-24 23:43:11.410562] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264560 ]
00:05:41.126 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.126 [2024-07-24 23:43:11.478240] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.126 [2024-07-24 23:43:11.601688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds'
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No
00:05:41.126 23:43:11 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:42.500 23:43:12 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=
00:05:42.501 23:43:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:42.501 23:43:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]]
00:05:42.501 23:43:12 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:42.501
00:05:42.501 real 0m1.484s
00:05:42.501 user 0m1.336s
00:05:42.501 sys 0m0.150s
00:05:42.501 23:43:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:42.501 23:43:12 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x
00:05:42.501 ************************************
00:05:42.501 END TEST accel_dif_generate_copy
00:05:42.501 ************************************
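The repeated IFS=: / read -r var val / case lines that dominate these traces all come from one small parser loop in accel.sh, which splits a colon-separated description of the run and remembers the opcode and module for the [[ -n ... ]] checks at the end of each block. Roughly, inferred from the accel.sh@19-@23 line numbers in the trace rather than copied from the source (describe_workload is a hypothetical stand-in for the real input):

    # Hypothetical feed standing in for the output the harness parses.
    describe_workload() {
        printf '%s\n' 'opcode: dif_generate_copy' 'module: software'
    }

    while IFS=: read -r var val; do
        val=${val# }                        # trim the space after ':'
        case "$var" in
            *opcode*) accel_opc=$val ;;     # checked later: [[ -n $accel_opc ]]
            *module*) accel_module=$val ;;  # checked later: [[ -n $accel_module ]]
        esac
    done < <(describe_workload)
    echo "opc=$accel_opc module=$accel_module"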
00:05:42.501 23:43:12 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:05:42.501 23:43:12 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:42.501 23:43:12 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']'
00:05:42.501 23:43:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:42.501 23:43:12 accel -- common/autotest_common.sh@10 -- # set +x
00:05:42.501 ************************************
00:05:42.501 START TEST accel_comp
00:05:42.501 ************************************
00:05:42.501 23:43:12 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:05:42.501 23:43:12 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:43:12.942755] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:42.501 [2024-07-24 23:43:12.942820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264724 ]
00:05:42.501 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.501 [2024-07-24 23:43:13.006801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.760 [2024-07-24 23:43:13.129219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:05:42.760 23:43:13 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:05:44.131 23:43:14 accel.accel_comp -- accel/accel.sh@20 -- # val=
00:05:44.132 23:43:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:44.132 23:43:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:05:44.132 23:43:14 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:44.132
00:05:44.132 real 0m1.494s
00:05:44.132 user 0m1.346s
00:05:44.132 sys 0m0.151s
00:05:44.132 23:43:14 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:44.132 23:43:14 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x
00:05:44.132 ************************************
00:05:44.132 END TEST accel_comp
00:05:44.132 ************************************
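accel_comp is the first block in this stretch that needs real input data: -l points accel_perf at spdk/test/accel/bib as the buffer to compress. Reproducing the invocation by hand looks roughly like this (command copied from the accel.sh@12 trace line; feeding an empty config on fd 62 is an assumption, the harness normally pipes its jq-built config there):

    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    # One-second compress run over the bib file on the software path.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -c /dev/fd/62 -t 1 -w compress -l "$bib" 62< <(echo '{}')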
00:05:44.132 23:43:14 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:44.132 23:43:14 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:44.132 23:43:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:44.132 23:43:14 accel -- common/autotest_common.sh@10 -- # set +x
00:05:44.132 ************************************
00:05:44.132 START TEST accel_decomp
00:05:44.132 ************************************
00:05:44.132 23:43:14 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=,
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:43:14.480551] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:44.132 [2024-07-24 23:43:14.480625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3264978 ]
00:05:44.132 EAL: No free 2048 kB hugepages reported on node 1
00:05:44.132 [2024-07-24 23:43:14.543056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.132 [2024-07-24 23:43:14.665845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:05:44.132 23:43:14 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:05:45.505 23:43:15 accel.accel_decomp -- accel/accel.sh@20 -- # val=
00:05:45.506 23:43:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:45.506 23:43:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:45.506 23:43:15 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:45.506
00:05:45.506 real 0m1.492s
00:05:45.506 user 0m1.363s
00:05:45.506 sys 0m0.132s
00:05:45.506 23:43:15 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:45.506 23:43:15 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x
00:05:45.506 ************************************
00:05:45.506 END TEST accel_decomp
00:05:45.506 ************************************
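The two decompress variants differ only in buffer handling: plain accel_decomp ran with the default 4096-byte transfer (the '4096 bytes' val= line above), while accel_decomp_full, which starts next, adds -o 0 and, judging by the '111250 bytes' value in its trace, pushes the whole decompressed bib file through in a single transfer. Side by side (-y enables result verification; the fd-62 config feed is the same assumption as in the earlier sketches):

    accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
    bib=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
    # accel_decomp:      default 4096-byte transfers
    "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y 62< <(echo '{}')
    # accel_decomp_full: -o 0 -> one full-file (111250-byte) transfer
    "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 62< <(echo '{}')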
00:05:45.506 23:43:15 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:45.506 23:43:15 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:45.506 23:43:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:45.506 23:43:15 accel -- common/autotest_common.sh@10 -- # set +x
00:05:45.506 ************************************
00:05:45.506 START TEST accel_decomp_full
00:05:45.506 ************************************
00:05:45.506 23:43:16 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=,
00:05:45.506 23:43:16 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:43:16.018578] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:45.506 [2024-07-24 23:43:16.018646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265154 ]
00:05:45.506 EAL: No free 2048 kB hugepages reported on node 1
00:05:45.506 [2024-07-24 23:43:16.080824] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:45.765 [2024-07-24 23:43:16.204197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:05:45.765 23:43:16 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:47.138 23:43:17 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=
00:05:47.138 23:43:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:47.138 23:43:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:47.138 23:43:17 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:47.138
00:05:47.138 real 0m1.506s
00:05:47.138 user 0m1.363s
00:05:47.138 sys 0m0.146s
00:05:47.138 23:43:17 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:47.138 23:43:17 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x
00:05:47.138 ************************************
00:05:47.138 END TEST accel_decomp_full
00:05:47.138 ************************************
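accel_decomp_mcore repeats the decompress run with -m 0xf, a four-core reactor mask; that is why the next block's startup notices report 'Total cores available: 4' and four 'Reactor started on core N' lines instead of one. With accel_perf and bib defined as in the sketches above, the mask is an ordinary SPDK core-mask argument:

    # 0xf = binary 1111 -> reactors on cores 0-3 (matches the four
    # reactor_run notices in the trace below).
    "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -m 0xf 62< <(echo '{}')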
00:05:47.138 23:43:17 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:47.139 23:43:17 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']'
00:05:47.139 23:43:17 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:47.139 23:43:17 accel -- common/autotest_common.sh@10 -- # set +x
00:05:47.139 ************************************
00:05:47.139 START TEST accel_decomp_mcore
00:05:47.139 ************************************
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=()
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]]
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=,
00:05:47.139 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r .
[2024-07-24 23:43:17.574618] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:05:47.139 [2024-07-24 23:43:17.574684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265309 ]
00:05:47.139 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.139 [2024-07-24 23:43:17.638411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:47.397 [2024-07-24 23:43:17.764716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:47.397 [2024-07-24 23:43:17.764768] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:47.397 [2024-07-24 23:43:17.764819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:05:47.397 [2024-07-24 23:43:17.764823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:05:47.397 23:43:17 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:48.772
00:05:48.772 real 0m1.489s
00:05:48.772 user 0m4.783s
00:05:48.772 sys 0m0.148s
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:48.772 23:43:19 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x
00:05:48.772 ************************************
00:05:48.772 END TEST accel_decomp_mcore
00:05:48.772 ************************************
accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:05:48.773 [2024-07-24 23:43:19.107378] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:48.773 [2024-07-24 23:43:19.107437] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265591 ] 00:05:48.773 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.773 [2024-07-24 23:43:19.171653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:48.773 [2024-07-24 23:43:19.296689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.773 [2024-07-24 23:43:19.296744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.773 [2024-07-24 23:43:19.296799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.773 [2024-07-24 23:43:19.296802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # 
val=decompress 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.773 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:48.774 23:43:19 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:48.774 23:43:19 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.147 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.147 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.147 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.147 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.147 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 
accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:50.148 00:05:50.148 real 0m1.517s 00:05:50.148 user 0m4.877s 00:05:50.148 sys 0m0.158s 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.148 23:43:20 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:50.148 ************************************ 00:05:50.148 END TEST accel_decomp_full_mcore 00:05:50.148 ************************************ 00:05:50.148 23:43:20 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.148 23:43:20 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:50.148 23:43:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.148 23:43:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:50.148 ************************************ 00:05:50.148 START TEST accel_decomp_mthread 00:05:50.148 ************************************ 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 
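For orientation, the four decompress variants exercised through this stretch differ only in the flags handed to accel_perf; the command lines below are condensed from the run_test invocations recorded in the log (the /var/jenkins/workspace prefix is shortened, everything else is as traced). The traces bear the flags out: the -m 0xf runs start four reactors on cores 0-3, the -o 0 runs show '111250 bytes' (the whole bib file) where the others show '4096 bytes', and the -T runs show val=2 for the thread count on the single default core (0x1).

    # Condensed from the run_test lines above and below; paths shortened.
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -m 0xf       # accel_decomp_mcore
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf  # accel_decomp_full_mcore
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -T 2         # accel_decomp_mthread
    accel_perf -c /dev/fd/62 -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2    # accel_decomp_full_mthread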
00:05:50.148 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:50.148 [2024-07-24 23:43:20.680432] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:50.148 [2024-07-24 23:43:20.680503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265746 ] 00:05:50.148 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.148 [2024-07-24 23:43:20.743663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.407 [2024-07-24 23:43:20.865994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 
23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:50.407 23:43:20 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:50.407 23:43:20 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:51.781 00:05:51.781 real 0m1.490s 00:05:51.781 user 0m1.347s 00:05:51.781 sys 0m0.146s 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.781 23:43:22 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:51.781 ************************************ 00:05:51.781 END TEST accel_decomp_mthread 00:05:51.781 ************************************ 00:05:51.781 23:43:22 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.781 23:43:22 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 
']' 00:05:51.781 23:43:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.781 23:43:22 accel -- common/autotest_common.sh@10 -- # set +x 00:05:51.781 ************************************ 00:05:51.781 START TEST accel_decomp_full_mthread 00:05:51.781 ************************************ 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:51.781 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:05:51.781 [2024-07-24 23:43:22.214907] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
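The case "$var" in / IFS=: / read -r var val triplets that dominate these traces are xtrace output of accel.sh walking accel_perf's settings dump one key:value pair at a time; only the values (val=decompress, val=software, val=32, ...) surface in the trace, so the key names and sample input in the sketch below are assumptions, not quotes from accel.sh:

    # Sketch of the parse loop behind these traces: split each "key:value"
    # pair on ':' and latch the two fields the post-run assertions need
    # ([[ -n software ]] and [[ -n decompress ]] in the log).
    printf '%s\n' 'opcode:decompress' 'module:software' 'queue depth:32' |
        while IFS=: read -r var val; do
            case "$var" in
                *opcode*) accel_opc=$val ;;    # traced as accel_opc=decompress
                *module*) accel_module=$val ;; # traced as accel_module=software
            esac
            echo "$var -> $val"
        done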
00:05:51.781 [2024-07-24 23:43:22.214973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3265910 ] 00:05:51.781 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.781 [2024-07-24 23:43:22.275964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.040 [2024-07-24 23:43:22.398940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:52.040 23:43:22 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:52.040 23:43:22 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:53.414 00:05:53.414 real 0m1.520s 00:05:53.414 user 0m1.377s 00:05:53.414 sys 0m0.145s 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.414 23:43:23 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:53.414 ************************************ 00:05:53.414 END 
TEST accel_decomp_full_mthread 00:05:53.414 ************************************ 00:05:53.414 23:43:23 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:53.414 23:43:23 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:53.414 23:43:23 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:53.414 23:43:23 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.414 23:43:23 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:53.414 23:43:23 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.414 23:43:23 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:53.414 23:43:23 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.414 23:43:23 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:53.414 23:43:23 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:53.414 23:43:23 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:53.414 23:43:23 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:53.414 23:43:23 accel -- accel/accel.sh@41 -- # jq -r . 00:05:53.414 ************************************ 00:05:53.414 START TEST accel_dif_functional_tests 00:05:53.414 ************************************ 00:05:53.414 23:43:23 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:05:53.414 [2024-07-24 23:43:23.806426] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:53.414 [2024-07-24 23:43:23.806495] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266182 ] 00:05:53.414 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.414 [2024-07-24 23:43:23.867978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:53.414 [2024-07-24 23:43:23.992829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.414 [2024-07-24 23:43:23.992884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.414 [2024-07-24 23:43:23.992888] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.672 00:05:53.672 00:05:53.672 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.672 http://cunit.sourceforge.net/ 00:05:53.672 00:05:53.672 00:05:53.672 Suite: accel_dif 00:05:53.672 Test: verify: DIF generated, GUARD check ...passed 00:05:53.672 Test: verify: DIF generated, APPTAG check ...passed 00:05:53.672 Test: verify: DIF generated, REFTAG check ...passed 00:05:53.673 Test: verify: DIF not generated, GUARD check ...[2024-07-24 23:43:24.095390] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:53.673 passed 00:05:53.673 Test: verify: DIF not generated, APPTAG check ...[2024-07-24 23:43:24.095468] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:53.673 passed 00:05:53.673 Test: verify: DIF not generated, REFTAG check ...[2024-07-24 23:43:24.095508] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:53.673 passed 00:05:53.673 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:53.673 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-24 23:43:24.095582] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: 
LBA=30, Expected=28, Actual=14 00:05:53.673 passed 00:05:53.673 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:53.673 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:53.673 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:53.673 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-24 23:43:24.095743] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:53.673 passed 00:05:53.673 Test: verify copy: DIF generated, GUARD check ...passed 00:05:53.673 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:53.673 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:53.673 Test: verify copy: DIF not generated, GUARD check ...[2024-07-24 23:43:24.095922] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:53.673 passed 00:05:53.673 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-24 23:43:24.095964] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:53.673 passed 00:05:53.673 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-24 23:43:24.096011] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:53.673 passed 00:05:53.673 Test: generate copy: DIF generated, GUARD check ...passed 00:05:53.673 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:53.673 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:53.673 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:53.673 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:53.673 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:53.673 Test: generate copy: iovecs-len validate ...[2024-07-24 23:43:24.096274] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
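The dif.c *ERROR* lines in this suite are expected output, not failures: each negative-path case hands the verifier a block whose Guard, App Tag, or Ref Tag was deliberately made inconsistent and passes only if the comparison is rejected, which is why every test here still reads "passed" and the run summary below reports no failures. A quick way to sanity-check that from a captured log (the accel_dif.log filename is hypothetical):

    # Every *ERROR* in this suite belongs to a negative-path probe; the
    # CUnit summary is the ground truth for pass/fail.
    grep -c '\*ERROR\*' accel_dif.log     # count of expected-failure probes
    grep -A4 'Run Summary' accel_dif.log  # tests 26 26 26 0 0 -> nothing failed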
00:05:53.673 passed 00:05:53.673 Test: generate copy: buffer alignment validate ...passed 00:05:53.673 00:05:53.673 Run Summary: Type Total Ran Passed Failed Inactive 00:05:53.673 suites 1 1 n/a 0 0 00:05:53.673 tests 26 26 26 0 0 00:05:53.673 asserts 115 115 115 0 n/a 00:05:53.673 00:05:53.673 Elapsed time = 0.005 seconds 00:05:53.931 00:05:53.931 real 0m0.606s 00:05:53.931 user 0m0.913s 00:05:53.931 sys 0m0.193s 00:05:53.931 23:43:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.931 23:43:24 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:53.931 ************************************ 00:05:53.931 END TEST accel_dif_functional_tests 00:05:53.931 ************************************ 00:05:53.931 00:05:53.931 real 0m33.660s 00:05:53.931 user 0m37.129s 00:05:53.931 sys 0m4.594s 00:05:53.931 23:43:24 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.931 23:43:24 accel -- common/autotest_common.sh@10 -- # set +x 00:05:53.931 ************************************ 00:05:53.931 END TEST accel 00:05:53.931 ************************************ 00:05:53.931 23:43:24 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:53.931 23:43:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.931 23:43:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.931 23:43:24 -- common/autotest_common.sh@10 -- # set +x 00:05:53.931 ************************************ 00:05:53.931 START TEST accel_rpc 00:05:53.931 ************************************ 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:05:53.931 * Looking for test storage... 00:05:53.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:53.931 23:43:24 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.931 23:43:24 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=3266253 00:05:53.931 23:43:24 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:53.931 23:43:24 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 3266253 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 3266253 ']' 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.931 23:43:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.931 [2024-07-24 23:43:24.542411] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:05:53.931 [2024-07-24 23:43:24.542505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266253 ] 00:05:54.189 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.189 [2024-07-24 23:43:24.603794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.189 [2024-07-24 23:43:24.725584] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.189 23:43:24 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.189 23:43:24 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:54.189 23:43:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:54.189 23:43:24 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:54.189 23:43:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:54.189 23:43:24 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:54.189 23:43:24 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:54.189 23:43:24 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.189 23:43:24 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.189 23:43:24 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 ************************************ 00:05:54.447 START TEST accel_assign_opcode 00:05:54.447 ************************************ 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 [2024-07-24 23:43:24.810251] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:54.447 [2024-07-24 23:43:24.818267] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.447 23:43:24 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.705 software 00:05:54.705 00:05:54.705 real 0m0.304s 00:05:54.705 user 0m0.040s 00:05:54.705 sys 0m0.006s 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.705 23:43:25 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:54.705 ************************************ 00:05:54.705 END TEST accel_assign_opcode 00:05:54.705 ************************************ 00:05:54.705 23:43:25 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 3266253 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 3266253 ']' 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 3266253 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3266253 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3266253' 00:05:54.705 killing process with pid 3266253 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@967 -- # kill 3266253 00:05:54.705 23:43:25 accel_rpc -- common/autotest_common.sh@972 -- # wait 3266253 00:05:55.270 00:05:55.270 real 0m1.198s 00:05:55.270 user 0m1.155s 00:05:55.270 sys 0m0.431s 00:05:55.270 23:43:25 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.270 23:43:25 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:55.270 ************************************ 00:05:55.270 END TEST accel_rpc 00:05:55.270 ************************************ 00:05:55.270 23:43:25 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:55.270 23:43:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.270 23:43:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.270 23:43:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.270 ************************************ 00:05:55.270 START TEST app_cmdline 00:05:55.270 ************************************ 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:55.270 * Looking for test storage... 
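Condensed, the accel_assign_opcode exchange just traced is a short RPC conversation. Because spdk_tgt was launched with --wait-for-rpc, subsystem initialization is held off until framework_start_init, which is the window in which accel_assign_opc can retarget an opcode; the bogus "incorrect" module is accepted at assignment time (the NOTICE above confirms it) and simply overridden by the later call. Replayed by hand it would look like this (rpc.py talks to the default /var/tmp/spdk.sock socket; repo-relative paths are shortened from the workspace paths in the log):

    build/bin/spdk_tgt --wait-for-rpc &                       # target idles before init
    scripts/rpc.py accel_assign_opc -o copy -m incorrect      # accepted, not validated yet
    scripts/rpc.py accel_assign_opc -o copy -m software       # overrides the previous assignment
    scripts/rpc.py framework_start_init                       # initialization proceeds
    scripts/rpc.py accel_get_opc_assignments | jq -r .copy    # prints: software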
00:05:55.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:55.270 23:43:25 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:55.270 23:43:25 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3266500 00:05:55.270 23:43:25 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:55.270 23:43:25 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3266500 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 3266500 ']' 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.270 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.270 23:43:25 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:55.270 [2024-07-24 23:43:25.781013] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:05:55.270 [2024-07-24 23:43:25.781103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3266500 ] 00:05:55.270 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.270 [2024-07-24 23:43:25.843456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.528 [2024-07-24 23:43:25.965402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:56.461 { 00:05:56.461 "version": "SPDK v24.09-pre git sha1 a1abc21f8", 00:05:56.461 "fields": { 00:05:56.461 "major": 24, 00:05:56.461 "minor": 9, 00:05:56.461 "patch": 0, 00:05:56.461 "suffix": "-pre", 00:05:56.461 "commit": "a1abc21f8" 00:05:56.461 } 00:05:56.461 } 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:56.461 23:43:26 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:56.461 23:43:26 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:56.461 23:43:27 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:56.720 request: 00:05:56.720 { 00:05:56.720 "method": "env_dpdk_get_mem_stats", 00:05:56.720 "req_id": 1 00:05:56.720 } 00:05:56.720 Got JSON-RPC error response 00:05:56.720 response: 00:05:56.720 { 00:05:56.720 "code": -32601, 00:05:56.720 "message": "Method not found" 00:05:56.720 } 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:56.720 23:43:27 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3266500 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 3266500 ']' 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 3266500 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3266500 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3266500' 00:05:56.720 killing process with pid 3266500 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@967 -- # kill 3266500 00:05:56.720 23:43:27 app_cmdline -- common/autotest_common.sh@972 -- # wait 3266500 00:05:57.286 00:05:57.286 real 0m2.087s 00:05:57.286 user 0m2.607s 00:05:57.286 sys 0m0.492s 00:05:57.286 23:43:27 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
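That app_cmdline pass is really a test of the RPC whitelist: spdk_tgt was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so rpc_get_methods reports exactly two methods and the env_dpdk_get_mem_stats call is rejected with the JSON-RPC -32601 "Method not found" error captured above. The same check can be reproduced by hand from an SPDK build tree (paths shortened from the ones in the trace):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version                      # allowed: returns the version object
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats                # blocked: error -32601, as logged above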
00:05:57.286 23:43:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:57.286 ************************************ 00:05:57.286 END TEST app_cmdline 00:05:57.286 ************************************ 00:05:57.286 23:43:27 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.286 23:43:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.286 23:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.286 23:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.286 ************************************ 00:05:57.286 START TEST version 00:05:57.286 ************************************ 00:05:57.286 23:43:27 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:57.286 * Looking for test storage... 00:05:57.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:57.286 23:43:27 version -- app/version.sh@17 -- # get_header_version major 00:05:57.286 23:43:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # cut -f2 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.286 23:43:27 version -- app/version.sh@17 -- # major=24 00:05:57.286 23:43:27 version -- app/version.sh@18 -- # get_header_version minor 00:05:57.286 23:43:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # cut -f2 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.286 23:43:27 version -- app/version.sh@18 -- # minor=9 00:05:57.286 23:43:27 version -- app/version.sh@19 -- # get_header_version patch 00:05:57.286 23:43:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # cut -f2 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.286 23:43:27 version -- app/version.sh@19 -- # patch=0 00:05:57.286 23:43:27 version -- app/version.sh@20 -- # get_header_version suffix 00:05:57.286 23:43:27 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # cut -f2 00:05:57.286 23:43:27 version -- app/version.sh@14 -- # tr -d '"' 00:05:57.286 23:43:27 version -- app/version.sh@20 -- # suffix=-pre 00:05:57.286 23:43:27 version -- app/version.sh@22 -- # version=24.9 00:05:57.286 23:43:27 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:57.286 23:43:27 version -- app/version.sh@28 -- # version=24.9rc0 00:05:57.286 23:43:27 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:57.286 23:43:27 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:57.545 23:43:27 version -- app/version.sh@30 -- # 
py_version=24.9rc0 00:05:57.545 23:43:27 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:57.545 00:05:57.545 real 0m0.111s 00:05:57.545 user 0m0.054s 00:05:57.545 sys 0m0.077s 00:05:57.545 23:43:27 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.546 23:43:27 version -- common/autotest_common.sh@10 -- # set +x 00:05:57.546 ************************************ 00:05:57.546 END TEST version 00:05:57.546 ************************************ 00:05:57.546 23:43:27 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@198 -- # uname -s 00:05:57.546 23:43:27 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:05:57.546 23:43:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:57.546 23:43:27 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:05:57.546 23:43:27 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:57.546 23:43:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:57.546 23:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.546 23:43:27 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:05:57.546 23:43:27 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:05:57.546 23:43:27 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.546 23:43:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.546 23:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.546 23:43:27 -- common/autotest_common.sh@10 -- # set +x 00:05:57.546 ************************************ 00:05:57.546 START TEST nvmf_tcp 00:05:57.546 ************************************ 00:05:57.546 23:43:27 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:57.546 * Looking for test storage... 00:05:57.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:57.546 23:43:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:57.546 23:43:28 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:57.546 23:43:28 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:57.546 23:43:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.546 23:43:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.546 23:43:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.546 ************************************ 00:05:57.546 START TEST nvmf_target_core 00:05:57.546 ************************************ 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:57.546 * Looking for test storage... 
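The version test that just passed cross-checks three sources of truth: the SPDK_VERSION_* defines in include/spdk/version.h, the shell's assembled string (24.9 plus rc0 because the suffix is -pre), and the Python package's spdk.__version__. Each header field is extracted the same way; roughly, from a checkout:

    grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' include/spdk/version.h | cut -f2 | tr -d '"'
    # => 24; MINOR, PATCH and SUFFIX are pulled with the same grep/cut/tr pipeline
    python3 -c 'import spdk; print(spdk.__version__)'
    # => 24.9rc0 on this build, which must match the header-derived string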
00:05:57.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:57.546 ************************************ 00:05:57.546 START TEST nvmf_abort 00:05:57.546 ************************************ 00:05:57.546 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:57.812 * Looking for test storage... 00:05:57.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:57.812 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
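Both sourcings of nvmf/common.sh above derive the initiator identity at run time rather than hard-coding it: nvme gen-hostnqn emits a UUID-based NQN, and the host ID is that NQN with the prefix stripped; the pair is later handed to nvme connect as --hostnqn/--hostid. In outline (the parameter expansion is a sketch of what common.sh does):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # everything after the last ':' is the bare UUID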
00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:57.813 23:43:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:05:57.813 23:43:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:59.711 23:43:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:59.711 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:59.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:59.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.712 23:43:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:59.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:59.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:59.712 23:43:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:59.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:59.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:05:59.712 00:05:59.712 --- 10.0.0.2 ping statistics --- 00:05:59.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.712 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:59.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:59.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:05:59.712 00:05:59.712 --- 10.0.0.1 ping statistics --- 00:05:59.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:59.712 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=3268626 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 3268626 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 3268626 ']' 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.712 23:43:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:59.712 [2024-07-24 23:43:30.321456] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
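The two pings bracketing this point validate the topology nvmf_tcp_init just built from the two E810 ports: cvl_0_0 is moved into a fresh network namespace and addressed as the target (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and TCP/4420 is opened in the firewall. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # root ns to target ns; the reverse ping runs via netns exec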
00:05:59.713 [2024-07-24 23:43:30.321561] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:59.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.971 [2024-07-24 23:43:30.388759] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:59.971 [2024-07-24 23:43:30.510816] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:59.971 [2024-07-24 23:43:30.510877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:59.971 [2024-07-24 23:43:30.510894] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:59.971 [2024-07-24 23:43:30.510908] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:59.971 [2024-07-24 23:43:30.510919] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:59.971 [2024-07-24 23:43:30.511005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:59.971 [2024-07-24 23:43:30.511062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.971 [2024-07-24 23:43:30.511066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 [2024-07-24 23:43:31.274990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 Malloc0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:00.907 Delay0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 [2024-07-24 23:43:31.345817] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.907 23:43:31 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:00.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.907 [2024-07-24 23:43:31.452374] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:03.435 [2024-07-24 23:43:33.519584] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262f160 is same with the state(5) to be set 00:06:03.435 Initializing NVMe Controllers 00:06:03.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:03.435 controller IO queue size 128 less than required 00:06:03.435 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:03.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:03.435 Initialization complete. Launching workers. 
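Before launching the abort workload whose results follow, the target was provisioned over JSON-RPC with a deliberately slow namespace: Delay0 wraps the 64 MiB malloc bdev and adds one second (1000000 us) of latency on every path, which is what keeps I/O queued long enough to be abortable. The sequence, as issued through scripts/rpc.py in the trace above:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0      # 64 MiB bdev, 4096-byte blocks
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420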
00:06:03.435 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33245 00:06:03.435 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33306, failed to submit 62 00:06:03.435 success 33249, unsuccess 57, failed 0 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:03.435 rmmod nvme_tcp 00:06:03.435 rmmod nvme_fabrics 00:06:03.435 rmmod nvme_keyring 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 3268626 ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 3268626 ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3268626' 00:06:03.435 killing process with pid 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 3268626 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:03.435 23:43:33 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:05.968 00:06:05.968 real 0m7.818s 00:06:05.968 user 0m12.494s 00:06:05.968 sys 0m2.499s 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:05.968 ************************************ 00:06:05.968 END TEST nvmf_abort 00:06:05.968 ************************************ 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.968 23:43:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:05.968 ************************************ 00:06:05.968 START TEST nvmf_ns_hotplug_stress 00:06:05.968 ************************************ 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:05.968 * Looking for test storage... 
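nvmf_abort's teardown, traced just above, mirrors the setup: unload the kernel initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines), kill and reap the target, then undo the namespace plumbing before the 0m7.818s summary. In outline, with the namespace removal a sketch of what _remove_spdk_ns does internally:

    modprobe -v -r nvme-tcp           # pulls out nvme_fabrics and nvme_keyring as dependents
    kill "$nvmfpid" && wait "$nvmfpid"
    ip netns delete cvl_0_0_ns_spdk   # assumption: the helper deletes the netns it created
    ip -4 addr flush cvl_0_1          # as logged, return the initiator port to a clean state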
00:06:05.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:05.968 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:05.969 23:43:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:07.342 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:07.342 23:43:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:07.342 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:07.342 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:07.342 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:07.342 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:07.343 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:07.601 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:07.601 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:07.601 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:07.601 23:43:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:07.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:07.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms
00:06:07.601
00:06:07.601 --- 10.0.0.2 ping statistics ---
00:06:07.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:07.601 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:06:07.601 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:06:07.601 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms
00:06:07.601
00:06:07.601 --- 10.0.0.1 ping statistics ---
00:06:07.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:06:07.601 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=3270871
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 3270871
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 3270871 ']'
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:07.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
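A note on the device-discovery block a few records up: gather_supported_nvmf_pci_devs classifies the host's NICs purely by PCI vendor:device ID (Intel E810 = 0x1592/0x159b, X722 = 0x37d2, plus several Mellanox ConnectX IDs) and then resolves each matching PCI address to the net device under /sys/bus/pci/devices/<pci>/net/, which is where the "Found net devices under 0000:0a:00.0: cvl_0_0" lines come from. A minimal standalone sketch of the same idea, assuming only pciutils is installed; the helper name and the ID list trimmed to the Intel parts are choices made here, not code from nvmf/common.sh:

    #!/usr/bin/env bash
    # Hypothetical helper, not part of nvmf/common.sh: find NICs by PCI
    # vendor:device ID and print the net device behind each one.
    find_nvmf_nics() {
      local intel=8086 id pci
      for id in 1592 159b 37d2; do   # E810 variants and X722, IDs from the trace
        for pci in $(lspci -D -d "${intel}:${id}" | awk '{print $1}'); do
          # the same /sys walk the trace performs: PCI address -> net device name
          ls "/sys/bus/pci/devices/${pci}/net/" 2>/dev/null
        done
      done
    }
    find_nvmf_nics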
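The ip netns/ping sequence just above is the heart of the TCP test-bed setup: the target-side port (cvl_0_0) is moved into a private network namespace, the initiator-side port (cvl_0_1) stays in the root namespace, the two get 10.0.0.2 and 10.0.0.1 on one /24, TCP port 4420 is opened in iptables, and both directions are pinged before nvmf_tgt is launched inside the namespace. A reproduction sketch on a veth pair instead of the two physical E810 ports, an assumption made here so it runs on any Linux box with iproute2 (all names below are made up):

    #!/usr/bin/env bash
    set -e
    # veth pair standing in for the cvl_0_0/cvl_0_1 port pair in the trace
    ip netns add tgt_ns
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns tgt_ns
    # same addressing as the trace: initiator gets .1, target gets .2
    ip addr add 10.0.0.1/24 dev veth_ini
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec tgt_ns ip link set veth_tgt up
    ip netns exec tgt_ns ip link set lo up
    # reachability check in both directions, exactly as the trace does
    ping -c 1 10.0.0.2
    ip netns exec tgt_ns ping -c 1 10.0.0.1
    # the CI then starts the target inside the namespace, e.g.:
    #   ip netns exec tgt_ns ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE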
00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.601 23:43:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:07.601 [2024-07-24 23:43:38.129501] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:06:07.601 [2024-07-24 23:43:38.129609] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:07.601 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.601 [2024-07-24 23:43:38.200389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.859 [2024-07-24 23:43:38.321019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:07.859 [2024-07-24 23:43:38.321094] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:07.859 [2024-07-24 23:43:38.321118] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:07.859 [2024-07-24 23:43:38.321132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:07.859 [2024-07-24 23:43:38.321143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:07.859 [2024-07-24 23:43:38.321264] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.859 [2024-07-24 23:43:38.321321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:07.859 [2024-07-24 23:43:38.321326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.791 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:08.792 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:09.049 [2024-07-24 23:43:39.416593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:09.049 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:09.307 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:09.565 
[2024-07-24 23:43:39.942944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:09.565 23:43:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:09.822 23:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:10.080 Malloc0 00:06:10.080 23:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:10.339 Delay0 00:06:10.339 23:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.597 23:43:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:10.597 NULL1 00:06:10.855 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:10.855 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3271315 00:06:10.855 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:10.855 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:10.855 23:43:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.045 Read completed with error (sct=0, sc=11) 00:06:12.045 23:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.045 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:12.560 23:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:12.560 23:43:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:12.817 true 00:06:12.818 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:12.818 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.382 23:43:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.639 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:13.639 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:13.896 true 00:06:13.896 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:13.896 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.153 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.410 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:14.410 23:43:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:14.667 true 00:06:14.667 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:14.667 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.924 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.181 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:15.181 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:15.438 true 00:06:15.439 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:15.439 23:43:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.402 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.402 23:43:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.660 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:16.660 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:16.660 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:17.223 true 00:06:17.223 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:17.223 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.223 23:43:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.479 23:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:17.480 23:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:17.736 true 00:06:17.736 23:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:17.736 23:43:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.668 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.668 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.926 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:18.926 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:19.184 true 00:06:19.184 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:19.184 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.441 23:43:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.699 23:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:19.699 23:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:19.956 true 00:06:19.956 23:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 
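For orientation while the loop above repeats: the target being stressed was assembled earlier in the trace from a handful of RPCs. Condensed into a script (paths shortened; a running nvmf_tgt and the in-tree scripts/rpc.py are assumed), the setup is:

    #!/usr/bin/env bash
    set -e
    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    "$rpc" nvmf_create_transport -t tcp -o -u 8192     # flags exactly as in the trace
    "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    "$rpc" bdev_malloc_create 32 512 -b Malloc0        # 32 MiB malloc bdev, 512-byte blocks
    # delay bdev over Malloc0 with the latency parameters used by the trace
    "$rpc" bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0         # becomes namespace 1
    "$rpc" bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks
    "$rpc" nvmf_subsystem_add_ns "$nqn" NULL1          # becomes namespace 2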
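The loop itself pairs that target with one spdk_nvme_perf process (-t 30 -q 128 -w randread -o 512: a 30-second, queue-depth-128, 512-byte random-read job against 10.0.0.2:4420) and, while it runs, keeps hot-removing and re-adding namespace 1 and growing NULL1 one unit per pass. A paraphrase of the @44-@50 records above; $perf_pid stands in for the PERF_PID the script saved when it backgrounded the perf job:

    #!/usr/bin/env bash
    rpc=scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    # keep going only while the perf job (started earlier with &) is still alive
    while kill -0 "$perf_pid" 2>/dev/null; do
      "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # hot-unplug namespace 1
      "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # hot-plug it back
      null_size=$((null_size + 1))
      "$rpc" bdev_null_resize NULL1 "$null_size"     # grow the other namespace's bdev
    done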
00:06:19.956 23:43:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.888 23:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:20.888 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:21.146 23:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:21.146 23:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:21.403 true 00:06:21.403 23:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:21.403 23:43:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.661 23:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.918 23:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:21.918 23:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:22.190 true 00:06:22.190 23:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:22.190 23:43:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.128 23:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.128 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:23.128 23:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:23.128 23:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:23.385 true 00:06:23.385 23:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:23.385 23:43:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.642 23:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.899 23:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:23.900 23:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:24.157 true 00:06:24.157 23:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:24.157 23:43:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.087 23:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.359 23:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:25.359 23:43:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:25.618 true 00:06:25.618 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:25.618 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.874 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.131 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:26.131 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:26.388 true 00:06:26.388 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:26.388 23:43:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.645 23:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.902 23:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:26.902 23:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:27.159 true 00:06:27.159 23:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:27.159 23:43:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.089 23:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.089 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.346 23:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:28.346 23:43:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:28.603 true 00:06:28.860 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:28.860 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.423 23:43:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.423 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:29.680 23:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:29.680 23:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:29.937 true 00:06:29.938 23:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:29.938 23:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.195 23:44:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.452 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:30.452 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:30.710 true 00:06:30.710 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:30.710 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.967 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:31.224 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:31.224 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:31.481 true 00:06:31.481 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:31.481 23:44:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.885 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.885 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.885 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:32.885 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:33.142 true 00:06:33.142 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:33.142 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.400 23:44:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:33.658 23:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:33.658 23:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:33.915 true 00:06:33.916 23:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:33.916 23:44:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.848 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.848 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:35.105 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:35.105 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:35.105 true 00:06:35.363 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:35.363 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.363 23:44:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.620 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:35.620 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:35.878 true 00:06:35.878 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:35.878 23:44:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:36.810 23:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.067 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.324 23:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:37.324 23:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:37.324 true 00:06:37.324 23:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:37.324 23:44:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.582 23:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.839 23:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:37.839 23:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:38.096 true 00:06:38.096 23:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:38.096 23:44:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.028 23:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.028 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.029 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:39.286 23:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:39.286 23:44:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:39.543 true 00:06:39.543 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:39.543 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.800 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.057 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:40.057 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:40.315 true 00:06:40.315 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315 00:06:40.315 23:44:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:41.247 23:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:41.247 Initializing NVMe Controllers 00:06:41.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:41.247 Controller IO queue size 128, less than required. 00:06:41.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.247 Controller IO queue size 128, less than required. 00:06:41.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:41.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:41.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:41.247 Initialization complete. Launching workers. 
00:06:41.247 ========================================================
00:06:41.247 Latency(us)
00:06:41.247 Device Information : IOPS MiB/s Average min max
00:06:41.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1205.32 0.59 56918.69 2674.06 1051625.73
00:06:41.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11465.52 5.60 11164.38 3563.53 450807.51
00:06:41.247 ========================================================
00:06:41.247 Total : 12670.84 6.19 15516.79 2674.06 1051625.73
00:06:41.247
00:06:41.504 23:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:41.504 23:44:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:41.761 true
00:06:41.761 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3271315
00:06:41.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3271315) - No such process
00:06:41.762 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3271315
00:06:41.762 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:42.019 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:42.276 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:42.276 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:42.276 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:42.276 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:42.276 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:42.533 null0
00:06:42.533 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:42.533 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:42.533 23:44:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:42.791 null1
00:06:42.791 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:42.791 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:42.791 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:43.047 null2
00:06:43.047 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:43.047
23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.047 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:43.303 null3 00:06:43.303 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.303 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.303 23:44:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:43.561 null4 00:06:43.561 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.561 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.561 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:43.819 null5 00:06:43.819 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:43.819 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:43.819 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:44.076 null6 00:06:44.076 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.076 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.076 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:44.335 null7 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
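[Editor's note] The add_remove calls being traced here pair one namespace ID with one backing bdev and hot-plug it repeatedly against cnode1. Reconstructed from the @14/@16/@17/@18 line tags in the xtrace (the function body is inferred, not copied from the script):

  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          $rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }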
00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
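[Editor's note] Each worker is backgrounded and its PID recorded, which is what the repeated @62-@64 entries show; the wait just below then blocks on all eight workers (PIDs 3275362-3275375 in this run). A sketch under the same assumptions as above:

  for ((i = 0; i < nthreads; i++)); do
      add_remove $((i + 1)) "null$i" &   # namespace ID i+1 is always backed by null<i>
      pids+=($!)
  done
  wait "${pids[@]}"   # blocks until every worker finishes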
00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3275362 3275363 3275365 3275367 3275369 3275371 3275373 3275375 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.335 23:44:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:44.593 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:44.851 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.109 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.367 23:44:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:45.626 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:45.884 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.142 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
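[Editor's note] Because eight workers hammer the same subsystem, the add and remove entries interleave nondeterministically from iteration to iteration; only the per-namespace ordering (add before its matching remove) is fixed. To see which namespaces are attached at a given instant one could query the target out of band; this inspection step is our illustration, not part of the test:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems

nvmf_get_subsystems returns JSON describing each subsystem, including its currently active namespaces, so diffing two snapshots taken while the workers run shows the hotplug churn directly.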
00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.401 23:44:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.659 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:46.917 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.176 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.434 23:44:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:47.692 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:47.971 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.244 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:48.502 23:44:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.760 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:49.018 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
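[Editor's note] The trace below is the teardown: once every worker's (( i < 10 )) test fails, the script clears its trap and calls nvmftestfini, which unloads the kernel initiator modules and kills the target process (pid 3270871). A sketch of that sequence, inferred from the @117-@125 and @948-@972 xtrace tags rather than copied from common.sh:

  sync                       # flush dirty pages before touching modules
  modprobe -v -r nvme-tcp    # rmmod nvme_tcp plus its nvme_fabrics/nvme_keyring dependencies
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"            # killprocess first checks the pid is alive (kill -0)
  wait "$nvmfpid"            # and that it is not a sudo process before killing it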
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:49.276 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.277 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:49.534 23:44:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:49.534 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:06:49.792 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 3270871 ']'
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 3270871
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 3270871 ']'
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 3270871
00:06:49.792 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3270871
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3270871'
00:06:49.793 killing process with pid 3270871
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 3270871
00:06:49.793 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 3270871
00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:50.051 23:44:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:52.581 00:06:52.581 real 0m46.685s 00:06:52.581 user 3m32.384s 00:06:52.581 sys 0m16.180s 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 ************************************ 00:06:52.581 END TEST nvmf_ns_hotplug_stress 00:06:52.581 ************************************ 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:52.581 ************************************ 00:06:52.581 START TEST nvmf_delete_subsystem 00:06:52.581 ************************************ 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:52.581 * Looking for test storage... 
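Between the END and START banners just above, nvmftestfini tore the first target down. A sketch of that teardown as traced, for the tcp transport; the retry loop around the module unload is visible in the trace, while the body of _remove_spdk_ns is not, so deleting the network namespace is an assumption:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break    # retried: devices can still be busy
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"      # killprocess 3270871: stop nvmf_tgt
    ip netns delete cvl_0_0_ns_spdk         # _remove_spdk_ns (assumed)
    ip -4 addr flush cvl_0_1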
00:06:52.581 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.581 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:52.582 23:44:22 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
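gather_supported_nvmf_pci_devs, starting here, buckets the host's NICs by PCI vendor:device ID (Intel E810 = 0x1592/0x159b, X722 = 0x37d2, plus several Mellanox IDs) before choosing the test interfaces. A hypothetical one-liner to spot-check the same inventory by hand, using standard lspci flags (-D show domain, -n numeric IDs, -d vendor:device filter) and the same /sys path the script reads:

    for id in 1592 159b; do
        for pci in $(lspci -Dnd "8086:$id" | awk '{print $1}'); do
            echo "$pci -> $(ls "/sys/bus/pci/devices/$pci/net/" 2>/dev/null)"
        done
    done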
00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:54.494 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:54.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:54.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:54.495 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:54.495 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:54.495 23:44:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:54.495 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:54.495 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:06:54.495 00:06:54.495 --- 10.0.0.2 ping statistics --- 00:06:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.495 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:54.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:54.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:06:54.495 00:06:54.495 --- 10.0.0.1 ping statistics --- 00:06:54.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:54.495 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=3278125 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 3278125 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 3278125 ']' 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.495 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.496 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.496 23:44:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.496 [2024-07-24 23:44:24.926677] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
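The target app launched here (nvmf_tgt under ip netns exec cvl_0_0_ns_spdk) runs on the topology assembled just above: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace as the target side, while its peer port (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed from the ip/iptables commands in the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator IP (root ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator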
00:06:54.496 [2024-07-24 23:44:24.926762] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:54.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.496 [2024-07-24 23:44:24.995855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.753 [2024-07-24 23:44:25.116122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:54.753 [2024-07-24 23:44:25.116175] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:54.753 [2024-07-24 23:44:25.116191] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:54.753 [2024-07-24 23:44:25.116204] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:54.753 [2024-07-24 23:44:25.116216] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:54.753 [2024-07-24 23:44:25.116310] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.753 [2024-07-24 23:44:25.116386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 [2024-07-24 23:44:25.900589] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 [2024-07-24 23:44:25.916861] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.318 NULL1 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.318 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.575 Delay0 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3278280 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:55.575 23:44:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:55.575 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.575 [2024-07-24 23:44:25.991506] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
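Setup for the delete-while-busy test is now complete. The RPC sequence traced above (delete_subsystem.sh@15-30), condensed into a standalone sketch; rpc_py and perf_pid are illustrative names, every command is taken from the trace, and spdk_nvme_perf is the binary from build/bin:

    rpc_py="scripts/rpc.py"
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py bdev_null_create NULL1 1000 512                 # 1000 MB null bdev, 512 B blocks
    $rpc_py bdev_delay_create -b NULL1 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # ~1 s avg/p99 latencies (us)
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2                                                 # let I/O queue up in Delay0
    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # @32: delete under load

The second-long Delay0 latencies keep the full queue depth of 128 outstanding when the delete lands, so the flood of "completed with error (sct=0, sc=8)" entries that follows is the intended outcome: sct=0/sc=0x08 decodes to the generic NVMe status "Command Aborted due to SQ Deletion", returned as the target tears down the subsystem's queues.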
00:06:57.469 23:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:57.469 23:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:57.469 23:44:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 starting I/O failed: -6 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Read completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.726 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 [2024-07-24 23:44:28.215569] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec18f0 is same with the state(5) to be set 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 starting I/O failed: -6 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 [2024-07-24 23:44:28.216138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f23d400d660 is same with the state(5) to be set 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 
Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, 
sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Write completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:57.727 Read completed with error (sct=0, sc=8) 00:06:58.659 [2024-07-24 23:44:29.168850] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec2ac0 is same with the state(5) to be set 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 [2024-07-24 23:44:29.218736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec13e0 is same with the state(5) to be set 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Write completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 Read completed with error (sct=0, sc=8) 00:06:58.659 
Read completed with error (sct=0, sc=8)
00:06:58.659 Write completed with error (sct=0, sc=8)
00:06:58.659 (the two completion records above repeat, interleaved, for every request still queued against the deleted subsystem; dozens of identical lines elided)
00:06:58.659 [2024-07-24 23:44:29.218950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec15c0 is same with the state(5) to be set
00:06:58.659 [2024-07-24 23:44:29.219151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec1c20 is same with the state(5) to be set
00:06:58.659 [2024-07-24 23:44:29.219299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f23d400d330 is same with the state(5) to be set
00:06:58.659 Initializing NVMe Controllers
00:06:58.659 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:58.659 Controller IO queue size 128, less than required.
00:06:58.659 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:58.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:06:58.659 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:06:58.659 Initialization complete. Launching workers.
00:06:58.659 ========================================================
00:06:58.659 Latency(us)
00:06:58.659 Device Information : IOPS MiB/s Average min max
00:06:58.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 179.66 0.09 958469.14 980.66 1012746.00
00:06:58.659 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.35 0.08 881019.96 336.33 1012800.58
00:06:58.660 ========================================================
00:06:58.660 Total : 334.02 0.16 922679.10 336.33 1012800.58
00:06:58.660
00:06:58.660 [2024-07-24 23:44:29.220321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec2ac0 (9): Bad file descriptor
00:06:58.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:06:58.660 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:58.660 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:06:58.660 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3278280
00:06:58.660 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3278280
00:06:59.225 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3278280) - No such process
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3278280
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 3278280
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 3278280
00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:06:59.225 23:44:29
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.225 [2024-07-24 23:44:29.743908] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3278803 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:59.225 23:44:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:59.225 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.225 [2024-07-24 23:44:29.809486] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
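
Condensed from the trace above and below: delete_subsystem.sh re-creates the target subsystem over RPC, points spdk_nvme_perf at it, and then polls until the perf process dies once the subsystem is deleted out from under it. A minimal sketch of that sequence (rpc_cmd wraps scripts/rpc.py in these tests; paths shortened, error handling elided):

    # re-create subsystem, listener and Delay0 namespace (cf. delete_subsystem.sh@48-50 in the trace)
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # background I/O load: 70/30 randrw, qd 128, 512 B blocks, 3 s (cf. delete_subsystem.sh@52)
    build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    # poll until perf exits after the subsystem is yanked (cf. @35/@36/@38 above)
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # bail out after ~15 s
        sleep 0.5
    done
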
00:06:59.790 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:59.790 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:06:59.790 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.355 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.355 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:07:00.355 23:44:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:00.919 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:00.919 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:07:00.919 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.177 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.177 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:07:01.177 23:44:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:01.742 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:01.742 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:07:01.742 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.306 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:02.306 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803 00:07:02.306 23:44:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:02.563 Initializing NVMe Controllers 00:07:02.563 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:02.563 Controller IO queue size 128, less than required. 00:07:02.563 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:02.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:02.563 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:02.563 Initialization complete. Launching workers. 
00:07:02.563 ========================================================
00:07:02.563 Latency(us)
00:07:02.563 Device Information : IOPS MiB/s Average min max
00:07:02.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003321.35 1000203.32 1010577.84
00:07:02.563 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004656.69 1000197.70 1040545.35
00:07:02.563 ========================================================
00:07:02.563 Total : 256.00 0.12 1003989.02 1000197.70 1040545.35
00:07:02.563
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3278803
00:07:02.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3278803) - No such process
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3278803
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:02.821 rmmod nvme_tcp
00:07:02.821 rmmod nvme_fabrics
00:07:02.821 rmmod nvme_keyring
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 3278125 ']'
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 3278125
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 3278125 ']'
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 3278125
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3278125
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '['
reactor_0 = sudo ']' 00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3278125' 00:07:02.821 killing process with pid 3278125 00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 3278125 00:07:02.821 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 3278125 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:03.078 23:44:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:05.606 00:07:05.606 real 0m12.941s 00:07:05.606 user 0m29.503s 00:07:05.606 sys 0m2.916s 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.606 ************************************ 00:07:05.606 END TEST nvmf_delete_subsystem 00:07:05.606 ************************************ 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:05.606 ************************************ 00:07:05.606 START TEST nvmf_host_management 00:07:05.606 ************************************ 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:05.606 * Looking for test storage... 
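
The nvmf_delete_subsystem teardown traced above (nvmftestfini) reduces to unloading the host-side NVMe-oF kernel modules, killing the nvmf_tgt reactor process, and flushing the test interfaces; roughly (a sketch with names from this run; the namespace removal inside _remove_spdk_ns is an assumption):

    modprobe -v -r nvme-tcp              # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above
    modprobe -v -r nvme-fabrics
    kill 3278125 && wait 3278125         # killprocess: the nvmf_tgt pid, running as reactor_0
    ip netns delete cvl_0_0_ns_spdk      # _remove_spdk_ns (assumed to delete the test namespace)
    ip -4 addr flush cvl_0_1
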
00:07:05.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:05.606 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:05.607 23:44:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:07.541 
23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:07.541 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:07.541 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:07.541 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:07.541 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:07.541 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:07.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:07.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:07:07.542 00:07:07.542 --- 10.0.0.2 ping statistics --- 00:07:07.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.542 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:07.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:07.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:07:07.542 00:07:07.542 --- 10.0.0.1 ping statistics --- 00:07:07.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:07.542 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=3281146 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 3281146 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3281146 ']' 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:07.542 23:44:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.542 [2024-07-24 23:44:37.999656] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:07:07.542 [2024-07-24 23:44:37.999752] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:07.542 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.542 [2024-07-24 23:44:38.071852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:07.800 [2024-07-24 23:44:38.186119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:07.800 [2024-07-24 23:44:38.186171] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:07.800 [2024-07-24 23:44:38.186204] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:07.800 [2024-07-24 23:44:38.186216] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:07.800 [2024-07-24 23:44:38.186226] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:07.800 [2024-07-24 23:44:38.188277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.800 [2024-07-24 23:44:38.188365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:07.800 [2024-07-24 23:44:38.188433] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:07.800 [2024-07-24 23:44:38.188438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 [2024-07-24 23:44:38.350757] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.800 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.800 Malloc0 00:07:08.057 [2024-07-24 23:44:38.416135] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:08.057 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.057 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3281195 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3281195 /var/tmp/bdevperf.sock 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 3281195 ']' 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:08.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
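
Target-side setup for this test is batched: host_management.sh writes the RPCs into rpcs.txt and replays them through a single rpc_cmd session (the @22/@23/@30 trace above). The file contents are not echoed in the log; judging by the Malloc0 output and the listener notice, they presumably look like the following (a sketch; the serial number is illustrative):

    bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE from above
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0000000000000000
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
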
00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:08.058 { 00:07:08.058 "params": { 00:07:08.058 "name": "Nvme$subsystem", 00:07:08.058 "trtype": "$TEST_TRANSPORT", 00:07:08.058 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:08.058 "adrfam": "ipv4", 00:07:08.058 "trsvcid": "$NVMF_PORT", 00:07:08.058 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:08.058 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:08.058 "hdgst": ${hdgst:-false}, 00:07:08.058 "ddgst": ${ddgst:-false} 00:07:08.058 }, 00:07:08.058 "method": "bdev_nvme_attach_controller" 00:07:08.058 } 00:07:08.058 EOF 00:07:08.058 )") 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:08.058 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:08.058 "params": { 00:07:08.058 "name": "Nvme0", 00:07:08.058 "trtype": "tcp", 00:07:08.058 "traddr": "10.0.0.2", 00:07:08.058 "adrfam": "ipv4", 00:07:08.058 "trsvcid": "4420", 00:07:08.058 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:08.058 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:08.058 "hdgst": false, 00:07:08.058 "ddgst": false 00:07:08.058 }, 00:07:08.058 "method": "bdev_nvme_attach_controller" 00:07:08.058 }' 00:07:08.058 [2024-07-24 23:44:38.496894] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:07:08.058 [2024-07-24 23:44:38.496968] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281195 ] 00:07:08.058 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.058 [2024-07-24 23:44:38.557001] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.315 [2024-07-24 23:44:38.675894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.315 Running I/O for 10 seconds... 
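
The JSON fragment above is emitted by gen_nvmf_target_json and handed to bdevperf via process substitution, which is why the trace shows --json /dev/fd/63. An equivalent standalone launch, plus the waitforio poll that the @54/@55/@58 trace below performs against the bdevperf RPC socket (a sketch; bounds and names taken from the trace):

    # start bdevperf against the target described by the generated JSON
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    # poll Nvme0n1 until it has served at least 100 reads (67 on the first poll below, 515 on the second)
    i=10
    while (( i-- )); do
        n=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
        [ "$n" -ge 100 ] && break
        sleep 0.25
    done
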
00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.315 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.573 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.573 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:07:08.573 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:07:08.573 23:44:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.832 23:44:39 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']'
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:08.832 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:08.832 [2024-07-24 23:44:39.258855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf9f650 is same with the state(5) to be set
00:07:08.832 (the recv-state message above repeats verbatim, timestamps 23:44:39.258855 through 23:44:39.259730, as the host is removed; several dozen identical lines elided)
00:07:08.833 [2024-07-24 23:44:39.259848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.259886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.259916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.259933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.259950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.259965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.259981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.259995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.260011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.260025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.260041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.260055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.260072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.260087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:08.833 [2024-07-24 23:44:39.260102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:08.833 [2024-07-24 23:44:39.260117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.833 [2024-07-24 23:44:39.260787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.833 [2024-07-24 23:44:39.260803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.260970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.260986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:08.834 [2024-07-24 23:44:39.261880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:08.834 [2024-07-24 23:44:39.261895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbda5a0 is same with the state(5) to be set 00:07:08.834 [2024-07-24 23:44:39.261968] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbda5a0 was disconnected and freed. reset controller. 
00:07:08.834 [2024-07-24 23:44:39.263127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:08.834 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.834 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:08.834 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:08.834 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:08.834 task offset: 73728 on job bdev=Nvme0n1 fails 00:07:08.834 00:07:08.834 Latency(us) 00:07:08.834 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.834 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:08.835 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:08.835 Verification LBA range: start 0x0 length 0x400 00:07:08.835 Nvme0n1 : 0.40 1435.91 89.74 159.55 0.00 38978.31 6043.88 34564.17 00:07:08.835 =================================================================================================================== 00:07:08.835 Total : 1435.91 89.74 159.55 0.00 38978.31 6043.88 34564.17 00:07:08.835 [2024-07-24 23:44:39.265172] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:08.835 [2024-07-24 23:44:39.265202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c9790 (9): Bad file descriptor 00:07:08.835 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:08.835 23:44:39 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:08.835 [2024-07-24 23:44:39.311892] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
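For reference, the host_management.sh@84 and @85 steps above reduce to two target-side RPC calls. A minimal standalone sketch, assuming a running SPDK target that already exposes nqn.2016-06.io.spdk:cnode0 on the default /var/tmp/spdk.sock RPC socket; the rpc/subnqn/hostnqn variables are illustrative shorthand, not part of the test:

    rpc=./scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode0
    hostnqn=nqn.2016-06.io.spdk:host0

    # Revoke the host's access while I/O is in flight; the target tears down
    # its queue pairs, which produces the ABORTED - SQ DELETION completions
    # dumped above.
    $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

    # Re-grant access so the initiator's controller reset can reconnect
    # ("Resetting controller successful" above).
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"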
00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3281195 00:07:09.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3281195) - No such process 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:09.766 { 00:07:09.766 "params": { 00:07:09.766 "name": "Nvme$subsystem", 00:07:09.766 "trtype": "$TEST_TRANSPORT", 00:07:09.766 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:09.766 "adrfam": "ipv4", 00:07:09.766 "trsvcid": "$NVMF_PORT", 00:07:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:09.766 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:09.766 "hdgst": ${hdgst:-false}, 00:07:09.766 "ddgst": ${ddgst:-false} 00:07:09.766 }, 00:07:09.766 "method": "bdev_nvme_attach_controller" 00:07:09.766 } 00:07:09.766 EOF 00:07:09.766 )") 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:09.766 23:44:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:09.766 "params": { 00:07:09.766 "name": "Nvme0", 00:07:09.766 "trtype": "tcp", 00:07:09.766 "traddr": "10.0.0.2", 00:07:09.766 "adrfam": "ipv4", 00:07:09.766 "trsvcid": "4420", 00:07:09.766 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:09.766 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:09.766 "hdgst": false, 00:07:09.766 "ddgst": false 00:07:09.766 }, 00:07:09.766 "method": "bdev_nvme_attach_controller" 00:07:09.766 }' 00:07:09.766 [2024-07-24 23:44:40.321588] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
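The heredoc above is how gen_nvmf_target_json assembles the bdev configuration that bdevperf reads from /dev/fd/62. A hand-written equivalent, as a sketch only: it assumes the standard SPDK "subsystems" wrapper around the fragment printed above (the helper generates it, but the log echoes only the attach-controller entry), and /tmp/nvme0.json is a hypothetical scratch path standing in for the test's file-descriptor pipe:

    # Hypothetical scratch file; the params block is copied verbatim from the
    # generated config printed above.
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same queue depth, I/O size, workload and runtime as host_management.sh@100.
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1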
00:07:09.766 [2024-07-24 23:44:40.321691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3281472 ] 00:07:09.766 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.023 [2024-07-24 23:44:40.382427] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.023 [2024-07-24 23:44:40.494335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.280 Running I/O for 1 seconds... 00:07:11.212 00:07:11.212 Latency(us) 00:07:11.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.212 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:11.212 Verification LBA range: start 0x0 length 0x400 00:07:11.212 Nvme0n1 : 1.03 1613.90 100.87 0.00 0.00 39028.35 7621.59 33399.09 00:07:11.212 =================================================================================================================== 00:07:11.212 Total : 1613.90 100.87 0.00 0.00 39028.35 7621.59 33399.09 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:11.469 23:44:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:11.469 rmmod nvme_tcp 00:07:11.469 rmmod nvme_fabrics 00:07:11.469 rmmod nvme_keyring 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 3281146 ']' 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 3281146 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 3281146 ']' 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 3281146 00:07:11.469 23:44:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.469 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3281146 00:07:11.726 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:11.726 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:11.726 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3281146' 00:07:11.726 killing process with pid 3281146 00:07:11.726 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 3281146 00:07:11.726 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 3281146 00:07:11.984 [2024-07-24 23:44:42.339413] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.984 23:44:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:13.879 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:13.880 00:07:13.880 real 0m8.689s 00:07:13.880 user 0m19.611s 00:07:13.880 sys 0m2.640s 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:13.880 ************************************ 00:07:13.880 END TEST nvmf_host_management 00:07:13.880 ************************************ 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:13.880 23:44:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:13.880 ************************************ 00:07:13.880 START TEST nvmf_lvol 00:07:13.880 ************************************ 00:07:13.880 
23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:14.138 * Looking for test storage... 00:07:14.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.138 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:07:14.139 23:44:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:07:16.038 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:16.039 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:16.039 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:16.039 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:16.039 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:16.039 23:44:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:16.039 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:16.297 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:16.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:16.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:07:16.298 00:07:16.298 --- 10.0.0.2 ping statistics --- 00:07:16.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.298 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:16.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:16.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:07:16.298 00:07:16.298 --- 10.0.0.1 ping statistics --- 00:07:16.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:16.298 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=3283668 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 3283668 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 3283668 ']' 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.298 23:44:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:16.298 [2024-07-24 23:44:46.862031] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
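The trace above is nvmf_tcp_init from test/nvmf/common.sh building the topology the rest of this run depends on: one port of the dual-port e810 (cvl_0_0) moves into a private network namespace to host the target at 10.0.0.2, while its sibling port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed to its essential commands — a sketch reconstructed from the xtrace lines, not from the common.sh source itself:

    ip -4 addr flush cvl_0_0                           # start both ports clean
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                       # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

Both pings succeed with sub-millisecond RTTs, so the two ports are evidently cabled back to back: with NET_TYPE=phy the traffic crosses the real e810 hardware rather than a veth pair.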
00:07:16.298 [2024-07-24 23:44:46.862126] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:16.298 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.555 [2024-07-24 23:44:46.940767] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.555 [2024-07-24 23:44:47.084579] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:16.555 [2024-07-24 23:44:47.084636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:16.555 [2024-07-24 23:44:47.084674] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:16.555 [2024-07-24 23:44:47.084694] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:16.555 [2024-07-24 23:44:47.084712] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:16.555 [2024-07-24 23:44:47.084805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.555 [2024-07-24 23:44:47.084866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.555 [2024-07-24 23:44:47.084874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.488 23:44:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:17.745 [2024-07-24 23:44:48.158379] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.745 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.002 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:18.002 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:18.259 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:18.259 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:18.517 23:44:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:18.774 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=fac67eae-ed24-4ddf-b1eb-799da94ddbe2 
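With networking verified and the target app listening on /var/tmp/spdk.sock, nvmf_lvol.sh assembles its device under test: two 64 MiB malloc bdevs with 512-byte blocks, striped into a RAID0, and a logical volume store on top (its UUID is the fac67eae-... value captured above). The RPC sequence, abbreviated from the trace with rpc.py's Jenkins workspace path shortened:

    rpc.py bdev_malloc_create 64 512             # Malloc0: 64 MiB, 512 B blocks
    rpc.py bdev_malloc_create 64 512             # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID0, 64 KiB strip
    rpc.py bdev_lvol_create_lvstore raid0 lvs    # prints the new lvstore's UUID

The next steps in the trace carve a 20 MiB lvol out of that store and export it as namespace 1 of nqn.2016-06.io.spdk:cnode0, listening on 10.0.0.2:4420.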
00:07:18.774 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fac67eae-ed24-4ddf-b1eb-799da94ddbe2 lvol 20 00:07:19.030 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3c6c6161-510f-4378-aac5-46b045595d90 00:07:19.030 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:19.287 23:44:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3c6c6161-510f-4378-aac5-46b045595d90 00:07:19.544 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:19.801 [2024-07-24 23:44:50.232082] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:19.801 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:20.058 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3284107 00:07:20.058 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:20.058 23:44:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:20.058 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.990 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3c6c6161-510f-4378-aac5-46b045595d90 MY_SNAPSHOT 00:07:21.247 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f0090271-a445-44de-9f97-be6850bd09f4 00:07:21.247 23:44:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3c6c6161-510f-4378-aac5-46b045595d90 30 00:07:21.810 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f0090271-a445-44de-9f97-be6850bd09f4 MY_CLONE 00:07:21.810 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f1b6ba1b-d7e3-48e6-a057-21fcb6a78b3b 00:07:21.810 23:44:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f1b6ba1b-d7e3-48e6-a057-21fcb6a78b3b 00:07:22.742 23:44:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3284107 00:07:30.879 Initializing NVMe Controllers 00:07:30.879 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:30.879 Controller IO queue size 128, less than required. 00:07:30.879 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
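The startup banner above (it continues below with the lcore associations) belongs to the test's background load: spdk_nvme_perf driving 4 KiB random writes at queue depth 128 for 10 seconds against the exported lvol. While that runs, the script mutates the lvol tree underneath the live I/O — exercising that concurrency is the point of nvmf_lvol. The mutation sequence, reconstructed from the trace with the UUIDs elided:

    rpc.py bdev_lvol_snapshot <lvol_uuid> MY_SNAPSHOT   # freeze the lvol's current contents
    rpc.py bdev_lvol_resize   <lvol_uuid> 30            # grow the live lvol from 20 to 30 MiB
    rpc.py bdev_lvol_clone    <snapshot_uuid> MY_CLONE  # writable clone of the snapshot
    rpc.py bdev_lvol_inflate  <clone_uuid>              # copy shared clusters so the clone stands alone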
00:07:30.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:30.879 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:30.879 Initialization complete. Launching workers. 00:07:30.879 ======================================================== 00:07:30.879 Latency(us) 00:07:30.879 Device Information : IOPS MiB/s Average min max 00:07:30.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10300.00 40.23 12432.09 1485.94 62271.47 00:07:30.879 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10538.80 41.17 12149.99 2116.65 54779.36 00:07:30.879 ======================================================== 00:07:30.879 Total : 20838.80 81.40 12289.42 1485.94 62271.47 00:07:30.879 00:07:30.879 23:45:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:30.879 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3c6c6161-510f-4378-aac5-46b045595d90 00:07:30.879 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fac67eae-ed24-4ddf-b1eb-799da94ddbe2 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.137 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.137 rmmod nvme_tcp 00:07:31.394 rmmod nvme_fabrics 00:07:31.394 rmmod nvme_keyring 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 3283668 ']' 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 3283668 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 3283668 ']' 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 3283668 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3283668 00:07:31.394 23:45:01 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3283668' 00:07:31.394 killing process with pid 3283668 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 3283668 00:07:31.394 23:45:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 3283668 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:31.652 23:45:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.179 00:07:34.179 real 0m19.737s 00:07:34.179 user 1m7.210s 00:07:34.179 sys 0m5.432s 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:34.179 ************************************ 00:07:34.179 END TEST nvmf_lvol 00:07:34.179 ************************************ 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:34.179 ************************************ 00:07:34.179 START TEST nvmf_lvs_grow 00:07:34.179 ************************************ 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:34.179 * Looking for test storage... 
00:07:34.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.179 23:45:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.179 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:34.180 23:45:04 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.180 23:45:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:36.079 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:36.079 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:36.079 
23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:36.079 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.079 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:36.080 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:36.080 23:45:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:36.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:36.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:07:36.080 00:07:36.080 --- 10.0.0.2 ping statistics --- 00:07:36.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.080 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:36.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:36.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:07:36.080 00:07:36.080 --- 10.0.0.1 ping statistics --- 00:07:36.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.080 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=3287490 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 3287490 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 3287490 ']' 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:36.080 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.080 [2024-07-24 23:45:06.522079] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
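nvmf_lvs_grow.sh now starts its own target in the same namespace, single-core this time (-m 0x1, against nvmf_lvol's 0x7 — this test needs no reactor parallelism). The start-and-wait idiom behind nvmfappstart and waitforlisten, sketched below; the polling loop is an illustrative stand-in for the real helper in autotest_common.sh, which likewise retries against the RPC socket instead of sleeping a fixed interval:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # poll until the app answers on its RPC Unix socket
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done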
00:07:36.080 [2024-07-24 23:45:06.522171] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.080 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.080 [2024-07-24 23:45:06.595560] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.337 [2024-07-24 23:45:06.722783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.337 [2024-07-24 23:45:06.722841] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.337 [2024-07-24 23:45:06.722855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.337 [2024-07-24 23:45:06.722866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.337 [2024-07-24 23:45:06.722876] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:36.337 [2024-07-24 23:45:06.722902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.338 23:45:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.595 [2024-07-24 23:45:07.097511] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.595 ************************************ 00:07:36.595 START TEST lvs_grow_clean 00:07:36.595 ************************************ 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.595 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.852 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:36.852 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:37.110 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:37.110 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:37.110 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:37.369 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:37.369 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:37.369 23:45:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c0c254c4-d7c7-4e19-8926-d4112077319a lvol 150 00:07:37.626 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=783b020b-0e0c-43a8-8155-ff353c553aae 00:07:37.626 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.626 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.883 [2024-07-24 23:45:08.463520] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:37.883 [2024-07-24 23:45:08.463605] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.883 true 00:07:37.883 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
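The lvs_grow_clean scenario being set up here is file-backed by design: a 200 MiB file is truncated into existence, wrapped in an AIO bdev with 4 KiB blocks (51200 of them), and an lvstore with an explicit 4 MiB cluster size is created on it. The jq query above will come back with 49 total_data_clusters, and the rest of the test hinges on that count growing once the backing file does. The arithmetic and the grow path, with the one-cluster metadata overhead inferred from the reported counts rather than read out of the lvstore code:

    # 200 MiB / 4 MiB clusters = 50; 49 usable => ~1 cluster held by lvstore metadata
    # 400 MiB / 4 MiB clusters = 100; 99 usable expected after the grow
    truncate -s 400M <aio_backing_file>            # grow the backing file (path shortened)
    rpc.py bdev_aio_rescan aio_bdev                # bdev grows: 51200 -> 102400 blocks
    rpc.py bdev_lvol_get_lvstores -u <lvs_uuid>    # still 49: rescan alone doesn't grow the store
    rpc.py bdev_lvol_grow_lvstore -u <lvs_uuid>    # total_data_clusters now reports 99

A 150 MiB lvol (38 clusters) is created along the way, so after the grow the store reports 99 clusters with 61 free — exactly the free_clusters check near the end of this test.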
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:37.883 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:38.141 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:38.141 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:38.399 23:45:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 783b020b-0e0c-43a8-8155-ff353c553aae 00:07:38.656 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.914 [2024-07-24 23:45:09.486631] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.914 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3288435 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3288435 /var/tmp/bdevperf.sock 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 3288435 ']' 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:39.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.172 23:45:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:39.430 [2024-07-24 23:45:09.796643] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:07:39.430 [2024-07-24 23:45:09.796728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3288435 ] 00:07:39.430 EAL: No free 2048 kB hugepages reported on node 1 00:07:39.430 [2024-07-24 23:45:09.857897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.430 [2024-07-24 23:45:09.974190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.688 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:39.688 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:07:39.688 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:39.946 Nvme0n1 00:07:39.946 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:40.204 [ 00:07:40.204 { 00:07:40.204 "name": "Nvme0n1", 00:07:40.204 "aliases": [ 00:07:40.204 "783b020b-0e0c-43a8-8155-ff353c553aae" 00:07:40.204 ], 00:07:40.204 "product_name": "NVMe disk", 00:07:40.204 "block_size": 4096, 00:07:40.204 "num_blocks": 38912, 00:07:40.204 "uuid": "783b020b-0e0c-43a8-8155-ff353c553aae", 00:07:40.204 "assigned_rate_limits": { 00:07:40.204 "rw_ios_per_sec": 0, 00:07:40.204 "rw_mbytes_per_sec": 0, 00:07:40.204 "r_mbytes_per_sec": 0, 00:07:40.204 "w_mbytes_per_sec": 0 00:07:40.204 }, 00:07:40.204 "claimed": false, 00:07:40.204 "zoned": false, 00:07:40.204 "supported_io_types": { 00:07:40.204 "read": true, 00:07:40.204 "write": true, 00:07:40.204 "unmap": true, 00:07:40.204 "flush": true, 00:07:40.204 "reset": true, 00:07:40.204 "nvme_admin": true, 00:07:40.204 "nvme_io": true, 00:07:40.204 "nvme_io_md": false, 00:07:40.204 "write_zeroes": true, 00:07:40.204 "zcopy": false, 00:07:40.204 "get_zone_info": false, 00:07:40.204 "zone_management": false, 00:07:40.204 "zone_append": false, 00:07:40.204 "compare": true, 00:07:40.204 "compare_and_write": true, 00:07:40.204 "abort": true, 00:07:40.204 "seek_hole": false, 00:07:40.204 "seek_data": false, 00:07:40.204 "copy": true, 00:07:40.204 "nvme_iov_md": false 00:07:40.204 }, 00:07:40.204 "memory_domains": [ 00:07:40.204 { 00:07:40.204 "dma_device_id": "system", 00:07:40.204 "dma_device_type": 1 00:07:40.204 } 00:07:40.204 ], 00:07:40.204 "driver_specific": { 00:07:40.204 "nvme": [ 00:07:40.204 { 00:07:40.204 "trid": { 00:07:40.204 "trtype": "TCP", 00:07:40.204 "adrfam": "IPv4", 00:07:40.204 "traddr": "10.0.0.2", 00:07:40.204 "trsvcid": "4420", 00:07:40.204 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:40.204 }, 00:07:40.204 "ctrlr_data": { 00:07:40.204 "cntlid": 1, 00:07:40.204 "vendor_id": "0x8086", 00:07:40.204 "model_number": "SPDK bdev Controller", 00:07:40.204 "serial_number": "SPDK0", 00:07:40.204 "firmware_revision": "24.09", 00:07:40.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:40.204 "oacs": { 00:07:40.204 "security": 0, 00:07:40.204 "format": 0, 00:07:40.204 "firmware": 0, 00:07:40.204 "ns_manage": 0 00:07:40.204 }, 00:07:40.204 
"multi_ctrlr": true, 00:07:40.204 "ana_reporting": false 00:07:40.204 }, 00:07:40.204 "vs": { 00:07:40.204 "nvme_version": "1.3" 00:07:40.204 }, 00:07:40.204 "ns_data": { 00:07:40.204 "id": 1, 00:07:40.204 "can_share": true 00:07:40.204 } 00:07:40.204 } 00:07:40.204 ], 00:07:40.204 "mp_policy": "active_passive" 00:07:40.204 } 00:07:40.204 } 00:07:40.204 ] 00:07:40.204 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3288573 00:07:40.204 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:40.204 23:45:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:40.204 Running I/O for 10 seconds... 00:07:41.576 Latency(us) 00:07:41.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.576 Nvme0n1 : 1.00 14544.00 56.81 0.00 0.00 0.00 0.00 0.00 00:07:41.576 =================================================================================================================== 00:07:41.576 Total : 14544.00 56.81 0.00 0.00 0.00 0.00 0.00 00:07:41.576 00:07:42.141 23:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:42.399 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.399 Nvme0n1 : 2.00 14588.00 56.98 0.00 0.00 0.00 0.00 0.00 00:07:42.399 =================================================================================================================== 00:07:42.399 Total : 14588.00 56.98 0.00 0.00 0.00 0.00 0.00 00:07:42.399 00:07:42.399 true 00:07:42.399 23:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:42.399 23:45:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:42.656 23:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:42.656 23:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:42.656 23:45:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3288573 00:07:43.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.222 Nvme0n1 : 3.00 14700.00 57.42 0.00 0.00 0.00 0.00 0.00 00:07:43.222 =================================================================================================================== 00:07:43.222 Total : 14700.00 57.42 0.00 0.00 0.00 0.00 0.00 00:07:43.222 00:07:44.595 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.595 Nvme0n1 : 4.00 14725.00 57.52 0.00 0.00 0.00 0.00 0.00 00:07:44.595 =================================================================================================================== 00:07:44.595 Total : 14725.00 57.52 0.00 0.00 0.00 0.00 0.00 00:07:44.595 00:07:45.528 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:07:45.528 Nvme0n1 : 5.00 14790.80 57.78 0.00 0.00 0.00 0.00 0.00 00:07:45.528 =================================================================================================================== 00:07:45.528 Total : 14790.80 57.78 0.00 0.00 0.00 0.00 0.00 00:07:45.528 00:07:46.459 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.459 Nvme0n1 : 6.00 14865.67 58.07 0.00 0.00 0.00 0.00 0.00 00:07:46.459 =================================================================================================================== 00:07:46.459 Total : 14865.67 58.07 0.00 0.00 0.00 0.00 0.00 00:07:46.459 00:07:47.392 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.392 Nvme0n1 : 7.00 14901.43 58.21 0.00 0.00 0.00 0.00 0.00 00:07:47.392 =================================================================================================================== 00:07:47.392 Total : 14901.43 58.21 0.00 0.00 0.00 0.00 0.00 00:07:47.392 00:07:48.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.356 Nvme0n1 : 8.00 14936.38 58.35 0.00 0.00 0.00 0.00 0.00 00:07:48.356 =================================================================================================================== 00:07:48.356 Total : 14936.38 58.35 0.00 0.00 0.00 0.00 0.00 00:07:48.356 00:07:49.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.288 Nvme0n1 : 9.00 14972.67 58.49 0.00 0.00 0.00 0.00 0.00 00:07:49.289 =================================================================================================================== 00:07:49.289 Total : 14972.67 58.49 0.00 0.00 0.00 0.00 0.00 00:07:49.289 00:07:50.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.222 Nvme0n1 : 10.00 14969.00 58.47 0.00 0.00 0.00 0.00 0.00 00:07:50.222 =================================================================================================================== 00:07:50.222 Total : 14969.00 58.47 0.00 0.00 0.00 0.00 0.00 00:07:50.222 00:07:50.222 00:07:50.222 Latency(us) 00:07:50.222 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:50.222 Nvme0n1 : 10.00 14977.37 58.51 0.00 0.00 8541.78 2961.26 15922.82 00:07:50.222 =================================================================================================================== 00:07:50.222 Total : 14977.37 58.51 0.00 0.00 8541.78 2961.26 15922.82 00:07:50.222 0 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3288435 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 3288435 ']' 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 3288435 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:50.222 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3288435 00:07:50.481 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:50.481 
23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:50.481 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3288435' 00:07:50.481 killing process with pid 3288435 00:07:50.481 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 3288435 00:07:50.481 Received shutdown signal, test time was about 10.000000 seconds 00:07:50.481 00:07:50.481 Latency(us) 00:07:50.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:50.481 =================================================================================================================== 00:07:50.481 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:50.481 23:45:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 3288435 00:07:50.738 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.995 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:51.252 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:51.252 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:51.510 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:51.510 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:51.510 23:45:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:51.768 [2024-07-24 23:45:22.167049] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:51.768 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:52.025 request: 00:07:52.025 { 00:07:52.025 "uuid": "c0c254c4-d7c7-4e19-8926-d4112077319a", 00:07:52.025 "method": "bdev_lvol_get_lvstores", 00:07:52.025 "req_id": 1 00:07:52.026 } 00:07:52.026 Got JSON-RPC error response 00:07:52.026 response: 00:07:52.026 { 00:07:52.026 "code": -19, 00:07:52.026 "message": "No such device" 00:07:52.026 } 00:07:52.026 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:07:52.026 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:52.026 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:52.026 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:52.026 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.283 aio_bdev 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 783b020b-0e0c-43a8-8155-ff353c553aae 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=783b020b-0e0c-43a8-8155-ff353c553aae 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:52.283 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:52.541 23:45:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 783b020b-0e0c-43a8-8155-ff353c553aae -t 2000 00:07:52.798 [ 00:07:52.798 { 00:07:52.798 "name": "783b020b-0e0c-43a8-8155-ff353c553aae", 00:07:52.798 "aliases": [ 00:07:52.798 "lvs/lvol" 00:07:52.798 ], 00:07:52.798 "product_name": "Logical Volume", 00:07:52.798 "block_size": 4096, 00:07:52.798 "num_blocks": 38912, 00:07:52.798 "uuid": "783b020b-0e0c-43a8-8155-ff353c553aae", 00:07:52.798 "assigned_rate_limits": { 00:07:52.798 "rw_ios_per_sec": 0, 00:07:52.798 "rw_mbytes_per_sec": 0, 00:07:52.798 "r_mbytes_per_sec": 0, 00:07:52.798 "w_mbytes_per_sec": 0 00:07:52.798 }, 00:07:52.798 "claimed": false, 00:07:52.798 "zoned": false, 00:07:52.798 "supported_io_types": { 00:07:52.798 "read": true, 00:07:52.798 "write": true, 00:07:52.798 "unmap": true, 00:07:52.798 "flush": false, 00:07:52.798 "reset": true, 00:07:52.798 "nvme_admin": false, 00:07:52.798 "nvme_io": false, 00:07:52.798 "nvme_io_md": false, 00:07:52.798 "write_zeroes": true, 00:07:52.798 "zcopy": false, 00:07:52.798 "get_zone_info": false, 00:07:52.798 "zone_management": false, 00:07:52.798 "zone_append": false, 00:07:52.798 "compare": false, 00:07:52.798 "compare_and_write": false, 00:07:52.798 "abort": false, 00:07:52.798 "seek_hole": true, 00:07:52.798 "seek_data": true, 00:07:52.798 "copy": false, 00:07:52.798 "nvme_iov_md": false 00:07:52.798 }, 00:07:52.798 "driver_specific": { 00:07:52.798 "lvol": { 00:07:52.798 "lvol_store_uuid": "c0c254c4-d7c7-4e19-8926-d4112077319a", 00:07:52.798 "base_bdev": "aio_bdev", 00:07:52.798 "thin_provision": false, 00:07:52.798 "num_allocated_clusters": 38, 00:07:52.798 "snapshot": false, 00:07:52.798 "clone": false, 00:07:52.798 "esnap_clone": false 00:07:52.798 } 00:07:52.798 } 00:07:52.798 } 00:07:52.798 ] 00:07:52.798 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:07:52.798 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:52.798 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:53.055 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:53.055 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:53.055 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:53.312 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:53.312 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 783b020b-0e0c-43a8-8155-ff353c553aae 00:07:53.569 23:45:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c0c254c4-d7c7-4e19-8926-d4112077319a 00:07:53.826 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.083 00:07:54.083 real 0m17.381s 00:07:54.083 user 0m16.688s 00:07:54.083 sys 0m2.023s 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:54.083 ************************************ 00:07:54.083 END TEST lvs_grow_clean 00:07:54.083 ************************************ 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:54.083 ************************************ 00:07:54.083 START TEST lvs_grow_dirty 00:07:54.083 ************************************ 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:54.083 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:54.084 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:54.341 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:54.341 23:45:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:54.598 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=42105277-9265-4a97-9083-a30b0bd42141 00:07:54.598 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:54.598 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:07:54.855 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:54.855 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:54.855 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42105277-9265-4a97-9083-a30b0bd42141 lvol 150 00:07:55.117 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=930a37e0-5a0c-43d5-8b36-080d2247f22f 00:07:55.117 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:55.117 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:55.374 [2024-07-24 23:45:25.922502] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:55.374 [2024-07-24 23:45:25.922606] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:55.374 true 00:07:55.374 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:07:55.374 23:45:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:55.632 23:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:55.632 23:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:55.889 23:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 930a37e0-5a0c-43d5-8b36-080d2247f22f 00:07:56.147 23:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:56.405 [2024-07-24 23:45:26.905493] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:56.405 23:45:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3290546 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3290546 /var/tmp/bdevperf.sock 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3290546 ']' 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:56.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.663 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:56.663 [2024-07-24 23:45:27.210383] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
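The entries that follow wire bdevperf to the freshly exported lvol over NVMe/TCP. Condensed from the surrounding trace, and with the long workspace prefix abbreviated to $SPDK purely for readability (that variable is not set by the scripts themselves), the flow is roughly:

  # launch bdevperf on its own RPC socket (randwrite, queue depth 128, 4 KiB I/O, 10 s)
  $SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # attach the target's namespace as bdev Nvme0, then kick off the timed run
  $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests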
00:07:56.663 [2024-07-24 23:45:27.210471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3290546 ] 00:07:56.663 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.663 [2024-07-24 23:45:27.272237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.920 [2024-07-24 23:45:27.389388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.920 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:56.920 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:07:56.920 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:57.485 Nvme0n1 00:07:57.485 23:45:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:57.743 [ 00:07:57.743 { 00:07:57.743 "name": "Nvme0n1", 00:07:57.743 "aliases": [ 00:07:57.743 "930a37e0-5a0c-43d5-8b36-080d2247f22f" 00:07:57.743 ], 00:07:57.743 "product_name": "NVMe disk", 00:07:57.743 "block_size": 4096, 00:07:57.743 "num_blocks": 38912, 00:07:57.743 "uuid": "930a37e0-5a0c-43d5-8b36-080d2247f22f", 00:07:57.743 "assigned_rate_limits": { 00:07:57.743 "rw_ios_per_sec": 0, 00:07:57.743 "rw_mbytes_per_sec": 0, 00:07:57.743 "r_mbytes_per_sec": 0, 00:07:57.743 "w_mbytes_per_sec": 0 00:07:57.743 }, 00:07:57.743 "claimed": false, 00:07:57.743 "zoned": false, 00:07:57.743 "supported_io_types": { 00:07:57.743 "read": true, 00:07:57.743 "write": true, 00:07:57.743 "unmap": true, 00:07:57.743 "flush": true, 00:07:57.743 "reset": true, 00:07:57.743 "nvme_admin": true, 00:07:57.743 "nvme_io": true, 00:07:57.743 "nvme_io_md": false, 00:07:57.743 "write_zeroes": true, 00:07:57.743 "zcopy": false, 00:07:57.743 "get_zone_info": false, 00:07:57.743 "zone_management": false, 00:07:57.743 "zone_append": false, 00:07:57.743 "compare": true, 00:07:57.743 "compare_and_write": true, 00:07:57.743 "abort": true, 00:07:57.743 "seek_hole": false, 00:07:57.743 "seek_data": false, 00:07:57.743 "copy": true, 00:07:57.743 "nvme_iov_md": false 00:07:57.743 }, 00:07:57.743 "memory_domains": [ 00:07:57.743 { 00:07:57.743 "dma_device_id": "system", 00:07:57.743 "dma_device_type": 1 00:07:57.743 } 00:07:57.743 ], 00:07:57.743 "driver_specific": { 00:07:57.743 "nvme": [ 00:07:57.743 { 00:07:57.743 "trid": { 00:07:57.743 "trtype": "TCP", 00:07:57.743 "adrfam": "IPv4", 00:07:57.743 "traddr": "10.0.0.2", 00:07:57.743 "trsvcid": "4420", 00:07:57.743 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:57.743 }, 00:07:57.743 "ctrlr_data": { 00:07:57.743 "cntlid": 1, 00:07:57.743 "vendor_id": "0x8086", 00:07:57.743 "model_number": "SPDK bdev Controller", 00:07:57.743 "serial_number": "SPDK0", 00:07:57.743 "firmware_revision": "24.09", 00:07:57.743 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.743 "oacs": { 00:07:57.743 "security": 0, 00:07:57.743 "format": 0, 00:07:57.743 "firmware": 0, 00:07:57.743 "ns_manage": 0 00:07:57.743 }, 00:07:57.743 
"multi_ctrlr": true, 00:07:57.743 "ana_reporting": false 00:07:57.743 }, 00:07:57.743 "vs": { 00:07:57.743 "nvme_version": "1.3" 00:07:57.743 }, 00:07:57.743 "ns_data": { 00:07:57.743 "id": 1, 00:07:57.743 "can_share": true 00:07:57.743 } 00:07:57.743 } 00:07:57.743 ], 00:07:57.743 "mp_policy": "active_passive" 00:07:57.743 } 00:07:57.743 } 00:07:57.743 ] 00:07:57.743 23:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3290635 00:07:57.743 23:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:57.743 23:45:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:57.743 Running I/O for 10 seconds... 00:07:59.115 Latency(us) 00:07:59.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:59.115 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.115 Nvme0n1 : 1.00 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:07:59.115 =================================================================================================================== 00:07:59.115 Total : 14225.00 55.57 0.00 0.00 0.00 0.00 0.00 00:07:59.115 00:07:59.681 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42105277-9265-4a97-9083-a30b0bd42141 00:07:59.939 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.939 Nvme0n1 : 2.00 14384.00 56.19 0.00 0.00 0.00 0.00 0.00 00:07:59.939 =================================================================================================================== 00:07:59.939 Total : 14384.00 56.19 0.00 0.00 0.00 0.00 0.00 00:07:59.939 00:07:59.939 true 00:07:59.939 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:07:59.939 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:00.197 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:00.197 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:00.197 23:45:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3290635 00:08:00.762 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.762 Nvme0n1 : 3.00 14481.33 56.57 0.00 0.00 0.00 0.00 0.00 00:08:00.762 =================================================================================================================== 00:08:00.762 Total : 14481.33 56.57 0.00 0.00 0.00 0.00 0.00 00:08:00.762 00:08:02.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.134 Nvme0n1 : 4.00 14550.25 56.84 0.00 0.00 0.00 0.00 0.00 00:08:02.134 =================================================================================================================== 00:08:02.134 Total : 14550.25 56.84 0.00 0.00 0.00 0.00 0.00 00:08:02.134 00:08:03.065 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, 
IO size: 4096) 00:08:03.065 Nvme0n1 : 5.00 14638.00 57.18 0.00 0.00 0.00 0.00 0.00 00:08:03.065 =================================================================================================================== 00:08:03.065 Total : 14638.00 57.18 0.00 0.00 0.00 0.00 0.00 00:08:03.065 00:08:04.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.025 Nvme0n1 : 6.00 14678.17 57.34 0.00 0.00 0.00 0.00 0.00 00:08:04.025 =================================================================================================================== 00:08:04.025 Total : 14678.17 57.34 0.00 0.00 0.00 0.00 0.00 00:08:04.025 00:08:04.968 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.968 Nvme0n1 : 7.00 14723.00 57.51 0.00 0.00 0.00 0.00 0.00 00:08:04.968 =================================================================================================================== 00:08:04.968 Total : 14723.00 57.51 0.00 0.00 0.00 0.00 0.00 00:08:04.968 00:08:05.900 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.900 Nvme0n1 : 8.00 14796.12 57.80 0.00 0.00 0.00 0.00 0.00 00:08:05.900 =================================================================================================================== 00:08:05.900 Total : 14796.12 57.80 0.00 0.00 0.00 0.00 0.00 00:08:05.900 00:08:06.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:06.832 Nvme0n1 : 9.00 14860.22 58.05 0.00 0.00 0.00 0.00 0.00 00:08:06.832 =================================================================================================================== 00:08:06.832 Total : 14860.22 58.05 0.00 0.00 0.00 0.00 0.00 00:08:06.832 00:08:07.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.764 Nvme0n1 : 10.00 14917.70 58.27 0.00 0.00 0.00 0.00 0.00 00:08:07.764 =================================================================================================================== 00:08:07.764 Total : 14917.70 58.27 0.00 0.00 0.00 0.00 0.00 00:08:07.764 00:08:07.764 00:08:07.764 Latency(us) 00:08:07.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:07.764 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:07.764 Nvme0n1 : 10.01 14915.84 58.27 0.00 0.00 8575.32 4369.07 19126.80 00:08:07.764 =================================================================================================================== 00:08:07.764 Total : 14915.84 58.27 0.00 0.00 8575.32 4369.07 19126.80 00:08:07.764 0 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3290546 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 3290546 ']' 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 3290546 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:07.764 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3290546 00:08:08.021 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:08.021 
23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:08.021 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3290546' 00:08:08.021 killing process with pid 3290546 00:08:08.022 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 3290546 00:08:08.022 Received shutdown signal, test time was about 10.000000 seconds 00:08:08.022 00:08:08.022 Latency(us) 00:08:08.022 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:08.022 =================================================================================================================== 00:08:08.022 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:08.022 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 3290546 00:08:08.279 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:08.536 23:45:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:08.794 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:08.794 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3287490 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3287490 00:08:09.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3287490 Killed "${NVMF_APP[@]}" "$@" 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=3291980 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
nvmf/common.sh@482 -- # waitforlisten 3291980 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 3291980 ']' 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.052 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.052 [2024-07-24 23:45:39.615588] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:08:09.052 [2024-07-24 23:45:39.615685] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.052 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.310 [2024-07-24 23:45:39.683731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.310 [2024-07-24 23:45:39.791154] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.310 [2024-07-24 23:45:39.791216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.310 [2024-07-24 23:45:39.791252] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.310 [2024-07-24 23:45:39.791265] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.310 [2024-07-24 23:45:39.791275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
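The notices just above spell out how to grab the tracepoint data this run enables with -e 0xFFFF; a minimal sketch following those hints (the /tmp copy destination is illustrative, not taken from the run):

  # live snapshot of the nvmf trace group for app instance 0, per the notice above
  spdk_trace -s nvmf -i 0
  # or keep the shared-memory file around for offline analysis, as the log suggests
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0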
00:08:09.310 [2024-07-24 23:45:39.791303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.310 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.310 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:09.310 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:09.310 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:09.310 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.567 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:09.567 23:45:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.825 [2024-07-24 23:45:40.211929] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:09.825 [2024-07-24 23:45:40.212072] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:09.825 [2024-07-24 23:45:40.212130] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 930a37e0-5a0c-43d5-8b36-080d2247f22f 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=930a37e0-5a0c-43d5-8b36-080d2247f22f 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:09.825 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:10.082 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 930a37e0-5a0c-43d5-8b36-080d2247f22f -t 2000 00:08:10.339 [ 00:08:10.339 { 00:08:10.339 "name": "930a37e0-5a0c-43d5-8b36-080d2247f22f", 00:08:10.339 "aliases": [ 00:08:10.339 "lvs/lvol" 00:08:10.339 ], 00:08:10.339 "product_name": "Logical Volume", 00:08:10.339 "block_size": 4096, 00:08:10.339 "num_blocks": 38912, 00:08:10.339 "uuid": "930a37e0-5a0c-43d5-8b36-080d2247f22f", 00:08:10.339 "assigned_rate_limits": { 00:08:10.339 "rw_ios_per_sec": 0, 00:08:10.339 "rw_mbytes_per_sec": 0, 00:08:10.339 "r_mbytes_per_sec": 0, 00:08:10.339 "w_mbytes_per_sec": 0 00:08:10.339 }, 00:08:10.340 "claimed": false, 00:08:10.340 "zoned": false, 
00:08:10.340 "supported_io_types": { 00:08:10.340 "read": true, 00:08:10.340 "write": true, 00:08:10.340 "unmap": true, 00:08:10.340 "flush": false, 00:08:10.340 "reset": true, 00:08:10.340 "nvme_admin": false, 00:08:10.340 "nvme_io": false, 00:08:10.340 "nvme_io_md": false, 00:08:10.340 "write_zeroes": true, 00:08:10.340 "zcopy": false, 00:08:10.340 "get_zone_info": false, 00:08:10.340 "zone_management": false, 00:08:10.340 "zone_append": false, 00:08:10.340 "compare": false, 00:08:10.340 "compare_and_write": false, 00:08:10.340 "abort": false, 00:08:10.340 "seek_hole": true, 00:08:10.340 "seek_data": true, 00:08:10.340 "copy": false, 00:08:10.340 "nvme_iov_md": false 00:08:10.340 }, 00:08:10.340 "driver_specific": { 00:08:10.340 "lvol": { 00:08:10.340 "lvol_store_uuid": "42105277-9265-4a97-9083-a30b0bd42141", 00:08:10.340 "base_bdev": "aio_bdev", 00:08:10.340 "thin_provision": false, 00:08:10.340 "num_allocated_clusters": 38, 00:08:10.340 "snapshot": false, 00:08:10.340 "clone": false, 00:08:10.340 "esnap_clone": false 00:08:10.340 } 00:08:10.340 } 00:08:10.340 } 00:08:10.340 ] 00:08:10.340 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:10.340 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:10.340 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:10.597 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:10.597 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:10.597 23:45:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:10.855 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:10.855 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:10.855 [2024-07-24 23:45:41.452766] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:11.111 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:11.368 request: 00:08:11.369 { 00:08:11.369 "uuid": "42105277-9265-4a97-9083-a30b0bd42141", 00:08:11.369 "method": "bdev_lvol_get_lvstores", 00:08:11.369 "req_id": 1 00:08:11.369 } 00:08:11.369 Got JSON-RPC error response 00:08:11.369 response: 00:08:11.369 { 00:08:11.369 "code": -19, 00:08:11.369 "message": "No such device" 00:08:11.369 } 00:08:11.369 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:08:11.369 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:11.369 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:11.369 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:11.369 23:45:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.626 aio_bdev 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 930a37e0-5a0c-43d5-8b36-080d2247f22f 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=930a37e0-5a0c-43d5-8b36-080d2247f22f 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:11.626 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:11.883 23:45:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 930a37e0-5a0c-43d5-8b36-080d2247f22f -t 2000 00:08:12.141 [ 00:08:12.141 { 00:08:12.141 "name": "930a37e0-5a0c-43d5-8b36-080d2247f22f", 00:08:12.141 "aliases": [ 00:08:12.141 "lvs/lvol" 00:08:12.141 ], 00:08:12.141 "product_name": "Logical Volume", 00:08:12.141 "block_size": 4096, 00:08:12.141 "num_blocks": 38912, 00:08:12.141 "uuid": "930a37e0-5a0c-43d5-8b36-080d2247f22f", 00:08:12.141 "assigned_rate_limits": { 00:08:12.141 "rw_ios_per_sec": 0, 00:08:12.141 "rw_mbytes_per_sec": 0, 00:08:12.141 "r_mbytes_per_sec": 0, 00:08:12.141 "w_mbytes_per_sec": 0 00:08:12.141 }, 00:08:12.141 "claimed": false, 00:08:12.141 "zoned": false, 00:08:12.141 "supported_io_types": { 00:08:12.141 "read": true, 00:08:12.141 "write": true, 00:08:12.141 "unmap": true, 00:08:12.141 "flush": false, 00:08:12.141 "reset": true, 00:08:12.141 "nvme_admin": false, 00:08:12.141 "nvme_io": false, 00:08:12.141 "nvme_io_md": false, 00:08:12.141 "write_zeroes": true, 00:08:12.141 "zcopy": false, 00:08:12.141 "get_zone_info": false, 00:08:12.141 "zone_management": false, 00:08:12.141 "zone_append": false, 00:08:12.141 "compare": false, 00:08:12.141 "compare_and_write": false, 00:08:12.141 "abort": false, 00:08:12.141 "seek_hole": true, 00:08:12.141 "seek_data": true, 00:08:12.141 "copy": false, 00:08:12.141 "nvme_iov_md": false 00:08:12.141 }, 00:08:12.141 "driver_specific": { 00:08:12.141 "lvol": { 00:08:12.141 "lvol_store_uuid": "42105277-9265-4a97-9083-a30b0bd42141", 00:08:12.141 "base_bdev": "aio_bdev", 00:08:12.141 "thin_provision": false, 00:08:12.141 "num_allocated_clusters": 38, 00:08:12.141 "snapshot": false, 00:08:12.141 "clone": false, 00:08:12.141 "esnap_clone": false 00:08:12.141 } 00:08:12.141 } 00:08:12.141 } 00:08:12.141 ] 00:08:12.141 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:08:12.141 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:12.141 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:12.398 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:12.398 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 00:08:12.398 23:45:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:12.656 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:12.656 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 930a37e0-5a0c-43d5-8b36-080d2247f22f 00:08:12.913 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42105277-9265-4a97-9083-a30b0bd42141 
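For reference, the checks and teardown around this point reduce to three RPCs against the recovered store, using the same UUIDs as the run above ($SPDK again stands in for the long workspace path):

  # dirty store recovered with 61 of 99 clusters free, matching the clean run's expectations
  $SPDK/scripts/rpc.py bdev_lvol_get_lvstores -u 42105277-9265-4a97-9083-a30b0bd42141 | jq -r '.[0].free_clusters'
  # drop the lvol, then the lvstore it lived in
  $SPDK/scripts/rpc.py bdev_lvol_delete 930a37e0-5a0c-43d5-8b36-080d2247f22f
  $SPDK/scripts/rpc.py bdev_lvol_delete_lvstore -u 42105277-9265-4a97-9083-a30b0bd42141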
00:08:13.170 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:13.427 00:08:13.427 real 0m19.323s 00:08:13.427 user 0m48.597s 00:08:13.427 sys 0m4.687s 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:13.427 ************************************ 00:08:13.427 END TEST lvs_grow_dirty 00:08:13.427 ************************************ 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:08:13.427 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:13.428 nvmf_trace.0 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:13.428 rmmod nvme_tcp 00:08:13.428 rmmod nvme_fabrics 00:08:13.428 rmmod nvme_keyring 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 3291980 ']' 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 3291980 00:08:13.428 
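The cleanup just logged archives the trace shared-memory file and unloads the kernel initiator modules; stripped of the xtrace decoration it is essentially the following, where $output_dir stands for the job's output directory (an abbreviation, not a variable the scripts export):

  # archive nvmf_trace.0 alongside the other job artifacts, then drop the NVMe/TCP modules
  tar -C /dev/shm/ -cvzf "$output_dir/nvmf_trace.0_shm.tar.gz" nvmf_trace.0
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics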
23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 3291980 ']' 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 3291980 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:08:13.428 23:45:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3291980 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3291980' 00:08:13.428 killing process with pid 3291980 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 3291980 00:08:13.428 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 3291980 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:13.685 23:45:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.215 00:08:16.215 real 0m42.078s 00:08:16.215 user 1m11.078s 00:08:16.215 sys 0m8.590s 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:16.215 ************************************ 00:08:16.215 END TEST nvmf_lvs_grow 00:08:16.215 ************************************ 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:16.215 ************************************ 00:08:16.215 START TEST nvmf_bdev_io_wait 00:08:16.215 ************************************ 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:16.215 * Looking for test storage... 00:08:16.215 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.215 
23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.215 23:45:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:08:18.113 23:45:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.113 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.114 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.114 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.114 23:45:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:18.114 00:08:18.114 --- 10.0.0.2 ping statistics --- 00:08:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.114 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:08:18.114 00:08:18.114 --- 10.0.0.1 ping statistics --- 00:08:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.114 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=3294516 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 3294516 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 3294516 ']' 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.114 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.114 [2024-07-24 23:45:48.664139] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
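Before this target came up, nvmf_tcp_init split the two e810 ports across network namespaces: cvl_0_0 was moved into cvl_0_0_ns_spdk to host the target at 10.0.0.2, while cvl_0_1 stayed in the root namespace as the initiator at 10.0.0.1, and the two pings above verify reachability in both directions. Condensed into a standalone sketch (interface names, addresses, and port exactly as logged in this run):

    #!/usr/bin/env bash
    # Two-namespace TCP test topology, condensed from the nvmf_tcp_init
    # commands logged above.
    set -euo pipefail
    ns=cvl_0_0_ns_spdk

    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"            # target-side port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator port stays in the root namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    ping -c 1 10.0.0.2                         # root namespace -> target
    ip netns exec "$ns" ping -c 1 10.0.0.1     # target namespace -> initiator

The target itself is then launched inside the namespace, which is why every nvmf_tgt invocation below is prefixed with ip netns exec cvl_0_0_ns_spdk.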
00:08:18.114 [2024-07-24 23:45:48.664225] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.114 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.372 [2024-07-24 23:45:48.733939] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.372 [2024-07-24 23:45:48.853337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.372 [2024-07-24 23:45:48.853399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.372 [2024-07-24 23:45:48.853416] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.372 [2024-07-24 23:45:48.853429] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.372 [2024-07-24 23:45:48.853442] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.372 [2024-07-24 23:45:48.853521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.372 [2024-07-24 23:45:48.853640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.372 [2024-07-24 23:45:48.853697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.372 [2024-07-24 23:45:48.853700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.372 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.629 23:45:48 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.629 [2024-07-24 23:45:48.990990] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.629 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.630 23:45:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.630 Malloc0 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.630 [2024-07-24 23:45:49.052662] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3294655 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3294657 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:18.630 { 00:08:18.630 "params": { 00:08:18.630 "name": "Nvme$subsystem", 00:08:18.630 "trtype": "$TEST_TRANSPORT", 00:08:18.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.630 "adrfam": "ipv4", 00:08:18.630 "trsvcid": "$NVMF_PORT", 00:08:18.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.630 "hdgst": ${hdgst:-false}, 00:08:18.630 "ddgst": ${ddgst:-false} 00:08:18.630 }, 00:08:18.630 "method": "bdev_nvme_attach_controller" 00:08:18.630 } 00:08:18.630 EOF 00:08:18.630 )") 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3294659 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3294661 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:18.630 { 00:08:18.630 "params": { 00:08:18.630 "name": "Nvme$subsystem", 00:08:18.630 "trtype": "$TEST_TRANSPORT", 00:08:18.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.630 "adrfam": "ipv4", 00:08:18.630 "trsvcid": "$NVMF_PORT", 00:08:18.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.630 "hdgst": ${hdgst:-false}, 00:08:18.630 "ddgst": ${ddgst:-false} 00:08:18.630 }, 00:08:18.630 "method": "bdev_nvme_attach_controller" 00:08:18.630 } 00:08:18.630 EOF 00:08:18.630 )") 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:18.630 { 00:08:18.630 "params": { 00:08:18.630 "name": "Nvme$subsystem", 00:08:18.630 "trtype": "$TEST_TRANSPORT", 00:08:18.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.630 "adrfam": "ipv4", 00:08:18.630 "trsvcid": "$NVMF_PORT", 00:08:18.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.630 "hdgst": ${hdgst:-false}, 00:08:18.630 "ddgst": ${ddgst:-false} 00:08:18.630 }, 00:08:18.630 "method": "bdev_nvme_attach_controller" 00:08:18.630 } 00:08:18.630 EOF 00:08:18.630 )") 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:18.630 { 00:08:18.630 "params": { 00:08:18.630 "name": "Nvme$subsystem", 00:08:18.630 "trtype": "$TEST_TRANSPORT", 00:08:18.630 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.630 "adrfam": "ipv4", 00:08:18.630 "trsvcid": "$NVMF_PORT", 00:08:18.630 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.630 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.630 "hdgst": ${hdgst:-false}, 00:08:18.630 "ddgst": ${ddgst:-false} 00:08:18.630 }, 00:08:18.630 "method": "bdev_nvme_attach_controller" 00:08:18.630 } 00:08:18.630 EOF 00:08:18.630 )") 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3294655 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
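The heredoc expansions above come from the gen_nvmf_target_json helper in common.sh: it renders one bdev_nvme_attach_controller entry per requested subsystem, joins the entries with IFS=',' and runs the result through jq before handing it to bdevperf. A rough, simplified re-creation follows; the field names are copied from the heredoc logged above, but the surrounding array and the ${VAR:-default} fallbacks (defaults mirroring this run's values) are this sketch's own framing, not the exact common.sh code:

    # Simplified stand-in for gen_nvmf_target_json.
    gen_target_json() {
        local subsystem config=()
        for subsystem in "${@:-1}"; do
            config+=("$(cat <<EOF
    {
      "params": {
        "name": "Nvme$subsystem",
        "trtype": "${TEST_TRANSPORT:-tcp}",
        "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
        "adrfam": "ipv4",
        "trsvcid": "${NVMF_PORT:-4420}",
        "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
        "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
        "hdgst": ${hdgst:-false},
        "ddgst": ${ddgst:-false}
      },
      "method": "bdev_nvme_attach_controller"
    }
    EOF
            )")
        done
        local IFS=,
        printf '[%s]\n' "${config[*]}" | jq .   # comma-join the entries, validate and pretty-print
    }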
00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:18.630 "params": { 00:08:18.630 "name": "Nvme1", 00:08:18.630 "trtype": "tcp", 00:08:18.630 "traddr": "10.0.0.2", 00:08:18.630 "adrfam": "ipv4", 00:08:18.630 "trsvcid": "4420", 00:08:18.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.630 "hdgst": false, 00:08:18.630 "ddgst": false 00:08:18.630 }, 00:08:18.630 "method": "bdev_nvme_attach_controller" 00:08:18.630 }' 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:18.630 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:18.630 "params": { 00:08:18.631 "name": "Nvme1", 00:08:18.631 "trtype": "tcp", 00:08:18.631 "traddr": "10.0.0.2", 00:08:18.631 "adrfam": "ipv4", 00:08:18.631 "trsvcid": "4420", 00:08:18.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.631 "hdgst": false, 00:08:18.631 "ddgst": false 00:08:18.631 }, 00:08:18.631 "method": "bdev_nvme_attach_controller" 00:08:18.631 }' 00:08:18.631 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:18.631 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:18.631 "params": { 00:08:18.631 "name": "Nvme1", 00:08:18.631 "trtype": "tcp", 00:08:18.631 "traddr": "10.0.0.2", 00:08:18.631 "adrfam": "ipv4", 00:08:18.631 "trsvcid": "4420", 00:08:18.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.631 "hdgst": false, 00:08:18.631 "ddgst": false 00:08:18.631 }, 00:08:18.631 "method": "bdev_nvme_attach_controller" 00:08:18.631 }' 00:08:18.631 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:08:18.631 23:45:49 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:18.631 "params": { 00:08:18.631 "name": "Nvme1", 00:08:18.631 "trtype": "tcp", 00:08:18.631 "traddr": "10.0.0.2", 00:08:18.631 "adrfam": "ipv4", 00:08:18.631 "trsvcid": "4420", 00:08:18.631 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.631 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.631 "hdgst": false, 00:08:18.631 "ddgst": false 00:08:18.631 }, 00:08:18.631 "method": "bdev_nvme_attach_controller" 00:08:18.631 }' 00:08:18.631 [2024-07-24 23:45:49.100707] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:08:18.631 [2024-07-24 23:45:49.100707] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:08:18.631 [2024-07-24 23:45:49.100708] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
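Each of those four JSON documents reaches its bdevperf instance through bash process substitution: the script passes --json <(gen_nvmf_target_json), which the child process sees as the /dev/fd/63 path visible in the command lines above, so no temp file is ever written. Reduced to the write job, with the path and flags exactly as logged in this run and gen_target_json standing in for the helper sketched earlier:

    bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

    # <(...) shows up in the child's argv as /dev/fd/63; bdevperf reads the
    # attach-controller config from that descriptor.
    "$bdevperf" -m 0x10 -i 1 --json <(gen_target_json) \
        -q 128 -o 4096 -w write -t 1 -s 256 &
    WRITE_PID=$!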
00:08:18.631 [2024-07-24 23:45:49.100793] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:18.631 [2024-07-24 23:45:49.100794] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:18.631 [2024-07-24 23:45:49.100793] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:18.631 [2024-07-24 23:45:49.101679] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:08:18.631 [2024-07-24 23:45:49.101747] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:18.631 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.114 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.888 [2024-07-24 23:45:49.270317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.888 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.888 [2024-07-24 23:45:49.366894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.888 [2024-07-24 23:45:49.370866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.888 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.888 [2024-07-24 23:45:49.466718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.888 [2024-07-24 23:45:49.487504] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.146 [2024-07-24 23:45:49.546407] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.146 [2024-07-24 23:45:49.594889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:08:19.146 [2024-07-24 23:45:49.640222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:08:19.146 Running I/O for 1 seconds... 00:08:19.403 Running I/O for 1 seconds... 00:08:19.403 Running I/O for 1 seconds... 00:08:19.403 Running I/O for 1 seconds...
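All four jobs now run their one-second workloads concurrently, each pinned to its own core (0x10, 0x20, 0x40, 0x80) against the same cnode1 subsystem; the `wait 3294655` traced above and the `wait 3294657/3294659/3294661` calls after the tables are how the script reaps them before tearing the target down. The actual script waits on each PID individually; a loop expresses the same pattern:

    # Reap the four concurrent bdevperf jobs; each wait blocks until that
    # instance finishes its -t 1 run and prints its latency table.
    for pid in "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"; do
        wait "$pid"
    done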
00:08:20.335 00:08:20.335 Latency(us) 00:08:20.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.335 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:20.335 Nvme1n1 : 1.02 5859.22 22.89 0.00 0.00 21617.49 9417.77 29127.11 00:08:20.335 =================================================================================================================== 00:08:20.335 Total : 5859.22 22.89 0.00 0.00 21617.49 9417.77 29127.11 00:08:20.335 00:08:20.335 Latency(us) 00:08:20.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.335 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:20.335 Nvme1n1 : 1.01 10156.33 39.67 0.00 0.00 12552.64 6844.87 23690.05 00:08:20.335 =================================================================================================================== 00:08:20.335 Total : 10156.33 39.67 0.00 0.00 12552.64 6844.87 23690.05 00:08:20.335 00:08:20.335 Latency(us) 00:08:20.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.335 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:20.335 Nvme1n1 : 1.01 5851.20 22.86 0.00 0.00 21790.57 6456.51 40195.41 00:08:20.335 =================================================================================================================== 00:08:20.335 Total : 5851.20 22.86 0.00 0.00 21790.57 6456.51 40195.41 00:08:20.335 00:08:20.335 Latency(us) 00:08:20.335 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.335 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:20.335 Nvme1n1 : 1.00 126082.61 492.51 0.00 0.00 1010.96 277.62 2706.39 00:08:20.335 =================================================================================================================== 00:08:20.335 Total : 126082.61 492.51 0.00 0.00 1010.96 277.62 2706.39 00:08:20.592 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3294657 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3294659 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3294661 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.849 rmmod nvme_tcp 00:08:20.849 rmmod nvme_fabrics 00:08:20.849 rmmod nvme_keyring 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 3294516 ']' 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 3294516 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 3294516 ']' 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 3294516 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3294516 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3294516' 00:08:20.849 killing process with pid 3294516 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 3294516 00:08:20.849 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 3294516 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:21.125 23:45:51 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:23.660 00:08:23.660 real 0m7.322s 00:08:23.660 user 0m16.402s 00:08:23.660 sys 0m3.768s 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:23.660 ************************************ 00:08:23.660 END TEST 
nvmf_bdev_io_wait 00:08:23.660 ************************************ 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:23.660 ************************************ 00:08:23.660 START TEST nvmf_queue_depth 00:08:23.660 ************************************ 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:23.660 * Looking for test storage... 00:08:23.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.660 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- 
# '[' 0 -eq 1 ']' 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:08:23.661 23:45:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@296 -- # e810=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:25.560 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.560 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:25.561 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:25.561 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:25.561 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:08:25.561 00:08:25.561 --- 10.0.0.2 ping statistics --- 00:08:25.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.561 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:08:25.561 00:08:25.561 --- 10.0.0.1 ping statistics --- 00:08:25.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.561 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=3296888 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 3296888 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3296888 ']' 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
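The trace above is the whole physical-NIC bring-up for this test: the two E810 ports (0000:0a:00.0 and 0000:0a:00.1, vendor:device 0x8086:0x159b, driver ice) are found under /sys/bus/pci/devices/*/net as cvl_0_0 and cvl_0_1, and the target-side port is sealed into its own network namespace so initiator and target traffic must cross the real link between the two ports (the successful pings confirm they are wired back-to-back on this rig). Condensed to plain iproute2 commands, the setup nvmf_tcp_init just performed amounts to the following sketch, using the interface names and addresses from this run:

   # sketch of the namespace topology nvmf_tcp_init built above
   ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
   ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
   ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default namespace
   ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
   ip link set cvl_0_1 up
   ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
   ip netns exec cvl_0_0_ns_spdk ip link set lo up
   iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
   ping -c 1 10.0.0.2                                  # initiator -> target, verified above
   ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator, verified below

With both pings answering, nvmfappstart launches the target inside the namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waitforlisten blocks until the /var/tmp/spdk.sock RPC socket appears, which is where the trace continues.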
00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.561 23:45:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.561 [2024-07-24 23:45:55.884791] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:08:25.561 [2024-07-24 23:45:55.884862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.561 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.561 [2024-07-24 23:45:55.952107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.561 [2024-07-24 23:45:56.071782] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.561 [2024-07-24 23:45:56.071852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.561 [2024-07-24 23:45:56.071869] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.561 [2024-07-24 23:45:56.071883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.561 [2024-07-24 23:45:56.071894] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.561 [2024-07-24 23:45:56.071926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 [2024-07-24 23:45:56.226026] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 Malloc0 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 [2024-07-24 23:45:56.289564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3296908 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3296908 /var/tmp/bdevperf.sock 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 3296908 ']' 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.820 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:25.820 [2024-07-24 23:45:56.336362] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
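At this point queue_depth.sh has fully configured the target over /var/tmp/spdk.sock: a TCP transport, a 64 MiB Malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1 exposing that bdev on 10.0.0.2:4420. The rpc_cmd traces above reduce to this rpc.py sequence (script path abbreviated; -o and -u 8192 are transport tuning flags passed verbatim by the harness):

   rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport for the target
   rpc.py bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
   rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
   rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
   rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The Starting SPDK banner above and the DPDK EAL parameter line below belong to bdevperf, which was launched with -z so it idles on its own RPC socket until the script attaches a controller over the fabric and kicks off the workload:

   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
       -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
   bdevperf.py -s /var/tmp/bdevperf.sock perform_tests  # runs the -q 1024 -o 4096 -w verify -t 10 job

The -q 1024 queue depth is the point of the test, and the result table that follows is self-consistent: roughly 8540 IOPS with 1024 I/Os kept outstanding implies an average completion latency of about 1024/8540 s, around 120 ms, matching the ~119349 us average reported.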
00:08:25.820 [2024-07-24 23:45:56.336439] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3296908 ] 00:08:25.820 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.820 [2024-07-24 23:45:56.398011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.078 [2024-07-24 23:45:56.515188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.078 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.078 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:08:26.078 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:26.078 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:26.078 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:26.336 NVMe0n1 00:08:26.336 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:26.336 23:45:56 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:26.336 Running I/O for 10 seconds... 00:08:38.533 00:08:38.533 Latency(us) 00:08:38.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.533 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:38.533 Verification LBA range: start 0x0 length 0x4000 00:08:38.533 NVMe0n1 : 10.08 8540.29 33.36 0.00 0.00 119349.33 17476.27 75730.49 00:08:38.533 =================================================================================================================== 00:08:38.533 Total : 8540.29 33.36 0.00 0.00 119349.33 17476.27 75730.49 00:08:38.533 0 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 3296908 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3296908 ']' 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3296908 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3296908 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3296908' 00:08:38.533 killing process with pid 3296908 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3296908 00:08:38.533 Received shutdown 
signal, test time was about 10.000000 seconds 00:08:38.533 00:08:38.533 Latency(us) 00:08:38.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:38.533 =================================================================================================================== 00:08:38.533 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:38.533 23:46:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3296908 00:08:38.533 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.534 rmmod nvme_tcp 00:08:38.534 rmmod nvme_fabrics 00:08:38.534 rmmod nvme_keyring 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 3296888 ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 3296888 ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3296888' 00:08:38.534 killing process with pid 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 3296888 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:38.534 23:46:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.099 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:39.099 00:08:39.099 real 0m15.947s 00:08:39.099 user 0m22.640s 00:08:39.099 sys 0m2.924s 00:08:39.099 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:39.099 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:39.099 ************************************ 00:08:39.099 END TEST nvmf_queue_depth 00:08:39.099 ************************************ 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.357 ************************************ 00:08:39.357 START TEST nvmf_target_multipath 00:08:39.357 ************************************ 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:39.357 * Looking for test storage... 
00:08:39.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.357 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:39.358 23:46:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:41.257 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:41.257 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:41.257 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:41.257 23:46:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:41.257 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:41.257 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:41.258 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:41.516 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:41.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:08:41.516 00:08:41.516 --- 10.0.0.2 ping statistics --- 00:08:41.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.516 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:41.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:41.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:08:41.516 00:08:41.516 --- 10.0.0.1 ping statistics --- 00:08:41.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:41.516 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:41.516 only one NIC for nvmf test 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:41.516 rmmod nvme_tcp 00:08:41.516 rmmod nvme_fabrics 00:08:41.516 rmmod nvme_keyring 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:41.516 23:46:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:44.045 00:08:44.045 real 0m4.299s 
00:08:44.045 user 0m0.812s 00:08:44.045 sys 0m1.468s 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:44.045 ************************************ 00:08:44.045 END TEST nvmf_target_multipath 00:08:44.045 ************************************ 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:44.045 ************************************ 00:08:44.045 START TEST nvmf_zcopy 00:08:44.045 ************************************ 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:44.045 * Looking for test storage... 00:08:44.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.045 23:46:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:44.045 23:46:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:44.045 23:46:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:45.947 23:46:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:45.947 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:45.947 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:45.947 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:45.947 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.947 23:46:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:45.947 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.947 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:45.947 00:08:45.947 --- 10.0.0.2 ping statistics --- 00:08:45.947 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.947 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:45.947 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.947 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.948 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:45.948 00:08:45.948 --- 10.0.0.1 ping statistics --- 00:08:45.948 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.948 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=3302096 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 3302096 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 3302096 ']' 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:45.948 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:45.948 [2024-07-24 23:46:16.438951] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
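Note: the nvmf_tcp_init sequence traced above reduces to the following standalone setup. This is a minimal sketch, assuming the same cvl_0_0/cvl_0_1 device names this run detected; on other hosts the names depend on NIC and driver, and the commands need root:

    # Target port lives in its own network namespace; initiator port stays in the host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP (port 4420) on the initiator interface, then check both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1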
00:08:45.948 [2024-07-24 23:46:16.439038] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.948 EAL: No free 2048 kB hugepages reported on node 1 00:08:45.948 [2024-07-24 23:46:16.501983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.206 [2024-07-24 23:46:16.609361] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.206 [2024-07-24 23:46:16.609421] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:46.206 [2024-07-24 23:46:16.609434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.206 [2024-07-24 23:46:16.609445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.206 [2024-07-24 23:46:16.609455] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.206 [2024-07-24 23:46:16.609483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 [2024-07-24 23:46:16.758018] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 [2024-07-24 23:46:16.774281] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.206 malloc0 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:46.206 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:46.487 { 00:08:46.487 "params": { 00:08:46.487 "name": "Nvme$subsystem", 00:08:46.487 "trtype": "$TEST_TRANSPORT", 00:08:46.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.487 "adrfam": "ipv4", 00:08:46.487 "trsvcid": "$NVMF_PORT", 00:08:46.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.487 "hdgst": ${hdgst:-false}, 00:08:46.487 "ddgst": ${ddgst:-false} 00:08:46.487 }, 00:08:46.487 "method": "bdev_nvme_attach_controller" 00:08:46.487 } 00:08:46.487 EOF 00:08:46.487 )") 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
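Note: the rpc_cmd calls traced above (zcopy.sh@22-30) do all of the target provisioning against the rpc_addr=/var/tmp/spdk.sock socket of the nvmf_tgt started inside the namespace. A sketch of the same sequence with the stock scripts/rpc.py client from the checked-out tree, on the assumption that the harness's rpc_cmd forwards these same RPCs; /var/tmp/spdk.sock is a filesystem UNIX socket, so no netns wrapper is needed:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # TCP transport with zero-copy enabled and in-capsule data disabled (-c 0)
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem capped at 10 namespaces (-m), any host may connect (-a)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with a 4096-byte block size, exported as namespace 1
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The JSON that gen_nvmf_target_json assembles via the heredoc above, and prints below, is what bdevperf receives on /dev/fd/62 through process substitution (--json /dev/fd/62).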
00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:08:46.487 23:46:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:08:46.487 "params": {
00:08:46.487 "name": "Nvme1",
00:08:46.487 "trtype": "tcp",
00:08:46.487 "traddr": "10.0.0.2",
00:08:46.487 "adrfam": "ipv4",
00:08:46.487 "trsvcid": "4420",
00:08:46.487 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:08:46.487 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:08:46.487 "hdgst": false,
00:08:46.487 "ddgst": false
00:08:46.487 },
00:08:46.487 "method": "bdev_nvme_attach_controller"
00:08:46.487 }'
00:08:46.487 [2024-07-24 23:46:16.866023] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:08:46.487 [2024-07-24 23:46:16.866103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3302118 ]
00:08:46.487 EAL: No free 2048 kB hugepages reported on node 1
00:08:46.487 [2024-07-24 23:46:16.934493] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:46.487 [2024-07-24 23:46:17.057177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:08:47.053 Running I/O for 10 seconds...
00:08:57.073
00:08:57.073                                                        Latency(us)
00:08:57.073 Device Information     : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:08:57.073 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:08:57.073 Verification LBA range: start 0x0 length 0x1000
00:08:57.073 Nvme1n1                :      10.01  5775.31    45.12     0.00   0.00   22100.83   470.28  33981.63
00:08:57.073 ===================================================================================================================
00:08:57.073 Total                  :             5775.31    45.12     0.00   0.00   22100.83   470.28  33981.63
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=3303435
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:08:57.331 {
00:08:57.331 "params": {
00:08:57.331 "name": "Nvme$subsystem",
00:08:57.331 "trtype": "$TEST_TRANSPORT",
00:08:57.331 "traddr": "$NVMF_FIRST_TARGET_IP",
00:08:57.331 "adrfam": "ipv4",
00:08:57.331 "trsvcid": "$NVMF_PORT",
00:08:57.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:08:57.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:08:57.331 "hdgst": ${hdgst:-false},
00:08:57.331 "ddgst": ${ddgst:-false}
00:08:57.331 },
00:08:57.331 "method": "bdev_nvme_attach_controller"
00:08:57.331 }
00:08:57.331 EOF
00:08:57.331 )")
00:08:57.331 23:46:27
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:57.331 [2024-07-24 23:46:27.700388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.700435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:57.331 23:46:27 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:57.331 "params": { 00:08:57.331 "name": "Nvme1", 00:08:57.331 "trtype": "tcp", 00:08:57.331 "traddr": "10.0.0.2", 00:08:57.331 "adrfam": "ipv4", 00:08:57.331 "trsvcid": "4420", 00:08:57.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:57.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:57.331 "hdgst": false, 00:08:57.331 "ddgst": false 00:08:57.331 }, 00:08:57.331 "method": "bdev_nvme_attach_controller" 00:08:57.331 }' 00:08:57.331 [2024-07-24 23:46:27.708356] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.708383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.716375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.716400] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.724395] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.724419] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.732417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.732441] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.740439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.740463] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.740567] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
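Note on the interleaved *ERROR* lines in this stretch: each "Requested NSID 1 already in use" / "Unable to add namespace" pair is the target rejecting an nvmf_subsystem_add_ns RPC for a namespace ID that is already attached, repeated while the 5-second randrw bdevperf job runs, which indicates the test is deliberately exercising that failure path under active zcopy I/O rather than failing. A hypothetical sketch of such a loop (not the verbatim zcopy.sh source):

    # Hammer the add-namespace error path while the perf job is alive;
    # re-adding NSID 1 must keep failing for as long as it is attached.
    while kill -0 "$perfpid" 2> /dev/null; do
        if rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 2> /dev/null; then
            echo "re-adding an in-use NSID unexpectedly succeeded" >&2
            exit 1
        fi
    done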
00:08:57.331 [2024-07-24 23:46:27.740661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3303435 ] 00:08:57.331 [2024-07-24 23:46:27.748462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.748488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.756482] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.756507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.764504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.764528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.331 [2024-07-24 23:46:27.772524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.772549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.780548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.780572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.788570] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.788595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.796593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.796616] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.804619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.804644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.805713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.331 [2024-07-24 23:46:27.812665] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.812702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.820678] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.820714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.828679] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.828705] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.836699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.836725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.844720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 
23:46:27.844743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.852743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.852768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.860767] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.860791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.868791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.868817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.876834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.876870] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.884833] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.884857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.892854] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.892879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.900902] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.900928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.908897] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.908921] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.916920] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.916943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.924942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.331 [2024-07-24 23:46:27.924965] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.331 [2024-07-24 23:46:27.928100] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.332 [2024-07-24 23:46:27.932967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.332 [2024-07-24 23:46:27.932992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.332 [2024-07-24 23:46:27.940993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.332 [2024-07-24 23:46:27.941019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.589 [2024-07-24 23:46:27.949037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.589 [2024-07-24 23:46:27.949074] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.957054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.957090] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.965082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.965118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.973101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.973140] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.981126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.981164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.989157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.989196] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:27.997134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:27.997156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.005196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.005257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.013205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.013259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.021238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.021282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.029213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.029265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.037258] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.037279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.045277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.045317] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.053312] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.053338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.061343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.061382] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.069349] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.069372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.077467] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.077492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.085415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.085440] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.093434] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.093456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.101439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.101460] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.109498] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.109536] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.117497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.117519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.125509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.125532] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.133529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.133552] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.141565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.141585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.149604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.149625] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.157620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.157641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.165627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.165661] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.173684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.173707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.181682] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.181722] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.590 [2024-07-24 23:46:28.189706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.590 [2024-07-24 23:46:28.189741] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:08:57.590 [2024-07-24 23:46:28.197763] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.590 [2024-07-24 23:46:28.197785] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats with advancing timestamps, roughly every 8-12 ms; entries from 23:46:28.205751 through 23:46:28.237864 elided ...]
00:08:57.848 Running I/O for 5 seconds...
00:08:57.848 [2024-07-24 23:46:28.245864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:57.848 [2024-07-24 23:46:28.245885] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating throughout the I/O window; entries from 23:46:28.260210 through 23:46:29.787808 elided ...]
00:08:59.397 [2024-07-24 23:46:29.799149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:08:59.397 [2024-07-24 23:46:29.799179] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
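The repeated pair above is SPDK's nvmf target rejecting an nvmf_subsystem_add_ns RPC because NSID 1 is already attached to the subsystem, followed by the RPC layer reporting the failed add. A minimal reproduction sketch against a locally built SPDK tree (the NQN, bdev name, and sizes below are illustrative assumptions, not values taken from this run):

    # Start the nvmf target (path assumes an in-tree build).
    ./build/bin/nvmf_tgt &
    sleep 2

    # Set up a TCP transport, a subsystem, and a backing malloc bdev.
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512

    # First add succeeds and claims NSID 1.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1

    # Second add with the same explicit NSID fails; the target logs
    # "Requested NSID 1 already in use", as seen throughout this section.
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1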
[... the same two-line pair continues with advancing timestamps; entries from 23:46:29.810352 through 23:46:31.297454 elided ...]
00:09:00.944 [2024-07-24 23:46:31.308619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:00.944 [2024-07-24 23:46:31.308650] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.321431] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.944 [2024-07-24 23:46:31.331594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.331641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.944 [2024-07-24 23:46:31.343122] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.343152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.944 [2024-07-24 23:46:31.356459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.356487] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.944 [2024-07-24 23:46:31.366699] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.366728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.944 [2024-07-24 23:46:31.378303] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.944 [2024-07-24 23:46:31.378331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.389733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.389771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.400995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.401024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.412384] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.412412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.426161] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.426191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.436842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.436872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.447956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.447987] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.458971] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.459001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.470337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.470364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.481561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.481590] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.492951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.492981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.504305] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.504332] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.515365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.515392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.528042] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.528072] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.537991] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.538021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.945 [2024-07-24 23:46:31.550054] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.945 [2024-07-24 23:46:31.550083] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.561095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.561125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.572488] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.572515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.583865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.583895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.595499] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.595526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.606802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.606841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.617884] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.617914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.629332] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.629359] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.640236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.640289] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.651697] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.651728] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.662844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.662874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.674115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.674145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.685552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.685582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.696912] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.696942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.710142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.710171] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.721155] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.721185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.732419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.732446] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.744111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.744142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.755703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.202 [2024-07-24 23:46:31.755733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.202 [2024-07-24 23:46:31.767009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.203 [2024-07-24 23:46:31.767039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.203 [2024-07-24 23:46:31.778471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.203 [2024-07-24 23:46:31.778499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.203 [2024-07-24 23:46:31.789805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.203 [2024-07-24 23:46:31.789836] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.203 [2024-07-24 23:46:31.803271] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.203 [2024-07-24 23:46:31.803315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.203 [2024-07-24 23:46:31.813942] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.203 [2024-07-24 23:46:31.813972] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.825683] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.825713] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.836959] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.836990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.848733] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.848763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.860301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.860329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.871158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.871189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.882655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.882685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.894114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.894144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.905770] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.905801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.917614] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.917644] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.928896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.928926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.940612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.940642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.952066] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.952097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.963848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.963878] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.976673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.976703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.987019] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.987049] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:31.998214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:31.998252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.009813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:32.009843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.021466] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:32.021494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.032294] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:32.032338] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.043795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:32.043825] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.055608] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.460 [2024-07-24 23:46:32.055641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.460 [2024-07-24 23:46:32.069048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.461 [2024-07-24 23:46:32.069078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.079877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.079906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.091497] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.091523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.102711] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.102741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.114334] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.114361] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.127782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.127812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.138841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.138871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.150350] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.150377] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.161381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.161409] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.172846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.172876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.184400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.184427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.196346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.196373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.208023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.208052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.219345] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.219372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.230504] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.230545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.242467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.242494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.254148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.254178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.265293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.265319] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.276925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.276954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.288131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.288161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.299366] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.299393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.311477] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.311505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.718 [2024-07-24 23:46:32.323264] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.718 [2024-07-24 23:46:32.323307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.334828] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.334857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.346130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.346160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.357508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.357535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.368978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.369008] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.380529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.380556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.391966] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.391996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.405403] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.405430] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.416166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.416195] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.427925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.427955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.439539] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.439582] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.452693] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.452724] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.463415] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.463442] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.474812] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.474842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.487886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.487916] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.498015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.498044] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.509933] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.509963] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.520968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.520998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.532296] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.532324] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.543261] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.543308] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.555056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.555086] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.566790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.566820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.976 [2024-07-24 23:46:32.578048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.976 [2024-07-24 23:46:32.578078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.589212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.589251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.600736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.600767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.611932] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.611962] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.623238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.623290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.638575] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.638606] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.649480] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.649507] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.661168] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.661198] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.672591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.672621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.683928] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.683959] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.695622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.234 [2024-07-24 23:46:32.695663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.234 [2024-07-24 23:46:32.706838] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.706868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.718138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.718168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.729398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.729425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.740426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.740453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.751934] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.751964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.763562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.763592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.775322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.775349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.788183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.788213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.799076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.799106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.810347] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.810375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.823324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.823351] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.833219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.833257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.235 [2024-07-24 23:46:32.845565] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.235 [2024-07-24 23:46:32.845609] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.857009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.857039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.868095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.868124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.879483] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.879510] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.892659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.892689] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.903832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.903862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.915035] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.915073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.926583] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.926614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.938000] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.938030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.949419] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.949447] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.960764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.960794] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.972337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.972364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.983997] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.984030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:32.995465] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:32.995492] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:33.006846] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:33.006876] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:33.020213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:33.020256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:33.031362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:33.031390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.493 [2024-07-24 23:46:33.042831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.493 [2024-07-24 23:46:33.042861] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.494 [2024-07-24 23:46:33.054505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.494 [2024-07-24 23:46:33.054548] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.494 [2024-07-24 23:46:33.066486] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.494 [2024-07-24 23:46:33.066541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.494 [2024-07-24 23:46:33.078061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.494 [2024-07-24 23:46:33.078091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.494 [2024-07-24 23:46:33.089252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.494 [2024-07-24 23:46:33.089296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.494 [2024-07-24 23:46:33.100392] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.494 [2024-07-24 23:46:33.100436] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.112012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.112043] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.123148] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.123178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.134117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.134156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.145441] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.145468] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.157010] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.157040] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.168680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.168710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.182360] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.182387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.193262] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.193304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.204298] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.204337] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.215251] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.215296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.226656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.226686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.237661] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.237691] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.248952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.248982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.259814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.259843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 00:09:02.751 Latency(us) 00:09:02.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.751 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:02.751 Nvme1n1 : 5.01 11269.15 88.04 0.00 0.00 11342.05 5024.43 22039.51 00:09:02.751 =================================================================================================================== 00:09:02.751 Total : 11269.15 88.04 0.00 0.00 11342.05 5024.43 22039.51 00:09:02.751 [2024-07-24 23:46:33.265324] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.265347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.273344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.273367] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.281351] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.281374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.289388] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.289422] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.297421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.297466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.305444] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.305488] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.751 [2024-07-24 23:46:33.313461] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.751 [2024-07-24 23:46:33.313505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.321484] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.321528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.329522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.329568] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.337529] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.337573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.345550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.345594] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.353568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.353613] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.752 [2024-07-24 23:46:33.361597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.752 [2024-07-24 23:46:33.361645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.369626] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.369669] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.377644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.377690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.385666] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.385710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.393687] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.393731] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.401706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.401751] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.409690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.409714] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.417712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.417737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.425730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.425754] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.433751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.433775] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.441790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.441823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.449831] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.449874] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.457853] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.457895] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.465840] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.465865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.473848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.473868] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.481882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.481906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.489906] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.489931] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.497969] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.498009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.505993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.506037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.513979] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.514004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.521994] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.522018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 [2024-07-24 23:46:33.530017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:03.009 [2024-07-24 23:46:33.530041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:03.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (3303435) - No such process 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 3303435 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.009 delay0 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:03.009 23:46:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:03.009 EAL: No free 2048 kB hugepages reported on node 1 00:09:03.009 [2024-07-24 23:46:33.616285] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:11.114 Initializing NVMe Controllers 00:09:11.114 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:11.114 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:11.114 Initialization complete. Launching workers. 
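What the trace above shows is the tail of zcopy.sh: the background loop that had been re-issuing nvmf_subsystem_add_ns against the paused subsystem (the source of the long "Requested NSID 1 already in use" storm) is killed at line 42 of the script (the kill reports "No such process" because the loop had already exited) and reaped with wait. The test then frees NSID 1, wraps the malloc0 bdev in a delay bdev that adds 1,000,000 us of latency to every I/O class, re-exports the slow bdev as NSID 1, and launches the abort example so the now-slow in-flight I/O can be aborted. The rpc_cmd calls in the trace go through SPDK's JSON-RPC interface; a minimal hand-run sketch of the same sequence, assuming scripts/rpc.py from the SPDK tree and a target listening on the default RPC socket, would be:

  # Free NSID 1, insert an artificial-latency bdev, and re-export it.
  RPC=./scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  $RPC nvmf_subsystem_remove_ns "$NQN" 1
  $RPC bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read, avg/p99 write latency, in us
  $RPC nvmf_subsystem_add_ns "$NQN" delay0 -n 1
  # Drive 5 s of queued random I/O at the slow namespace and try to abort it:
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev is what makes the abort statistics below meaningful: without the 1 s latencies, most commands would complete before their abort requests could land.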
00:09:11.114 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 20505 00:09:11.114 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 20622, failed to submit 123 00:09:11.114 success 20530, unsuccess 92, failed 0 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.114 rmmod nvme_tcp 00:09:11.114 rmmod nvme_fabrics 00:09:11.114 rmmod nvme_keyring 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 3302096 ']' 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 3302096 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 3302096 ']' 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 3302096 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3302096 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3302096' 00:09:11.114 killing process with pid 3302096 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 3302096 00:09:11.114 23:46:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 3302096 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.114 23:46:41 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:13.018 00:09:13.018 real 0m29.166s 00:09:13.018 user 0m42.136s 00:09:13.018 sys 0m9.784s 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:13.018 ************************************ 00:09:13.018 END TEST nvmf_zcopy 00:09:13.018 ************************************ 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:13.018 ************************************ 00:09:13.018 START TEST nvmf_nmic 00:09:13.018 ************************************ 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:13.018 * Looking for test storage... 00:09:13.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:13.018 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:13.019 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:13.019 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:09:13.019 23:46:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:14.918 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:14.918 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:14.918 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:14.918 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:14.918 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:14.919 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:14.919 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:09:14.919 00:09:14.919 --- 10.0.0.2 ping statistics --- 00:09:14.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.919 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.919 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:14.919 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:09:14.919 00:09:14.919 --- 10.0.0.1 ping statistics --- 00:09:14.919 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.919 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:14.919 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=3306945 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 3306945 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 3306945 ']' 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:15.176 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.176 [2024-07-24 23:46:45.581638] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:09:15.176 [2024-07-24 23:46:45.581732] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:15.176 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.176 [2024-07-24 23:46:45.647879] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:15.176 [2024-07-24 23:46:45.760062] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:15.176 [2024-07-24 23:46:45.760134] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:15.176 [2024-07-24 23:46:45.760162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:15.176 [2024-07-24 23:46:45.760173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:15.176 [2024-07-24 23:46:45.760183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:15.176 [2024-07-24 23:46:45.760277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.176 [2024-07-24 23:46:45.760347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:15.176 [2024-07-24 23:46:45.760414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:15.176 [2024-07-24 23:46:45.760416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-24 23:46:45.919730] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 Malloc0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-24 23:46:45.973335] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:15.434 test case1: single bdev can't be used in multiple subsystems 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-24 23:46:45.997145] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:15.434 [2024-07-24 23:46:45.997173] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:15.434 [2024-07-24 23:46:45.997202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:15.434 request: 00:09:15.434 { 00:09:15.434 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:15.434 "namespace": { 
00:09:15.434 "bdev_name": "Malloc0", 00:09:15.434 "no_auto_visible": false 00:09:15.434 }, 00:09:15.434 "method": "nvmf_subsystem_add_ns", 00:09:15.434 "req_id": 1 00:09:15.434 } 00:09:15.434 Got JSON-RPC error response 00:09:15.434 response: 00:09:15.434 { 00:09:15.434 "code": -32602, 00:09:15.434 "message": "Invalid parameters" 00:09:15.434 } 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:15.434 Adding namespace failed - expected result. 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:15.434 test case2: host connect to nvmf target in multiple paths 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:15.434 [2024-07-24 23:46:46.005259] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:15.434 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:16.367 23:46:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:16.932 23:46:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:16.932 23:46:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1196 -- # local i=0 00:09:16.932 23:46:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:16.932 23:46:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:09:16.932 23:46:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # sleep 2 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # return 0 
00:09:18.868 23:46:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:18.868 [global] 00:09:18.868 thread=1 00:09:18.868 invalidate=1 00:09:18.868 rw=write 00:09:18.868 time_based=1 00:09:18.868 runtime=1 00:09:18.868 ioengine=libaio 00:09:18.868 direct=1 00:09:18.868 bs=4096 00:09:18.868 iodepth=1 00:09:18.868 norandommap=0 00:09:18.868 numjobs=1 00:09:18.868 00:09:18.868 verify_dump=1 00:09:18.868 verify_backlog=512 00:09:18.868 verify_state_save=0 00:09:18.868 do_verify=1 00:09:18.868 verify=crc32c-intel 00:09:18.868 [job0] 00:09:18.868 filename=/dev/nvme0n1 00:09:18.868 Could not set queue depth (nvme0n1) 00:09:19.124 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:19.124 fio-3.35 00:09:19.124 Starting 1 thread 00:09:20.494 00:09:20.494 job0: (groupid=0, jobs=1): err= 0: pid=3307481: Wed Jul 24 23:46:50 2024 00:09:20.494 read: IOPS=512, BW=2049KiB/s (2098kB/s)(2100KiB/1025msec) 00:09:20.494 slat (nsec): min=6618, max=65794, avg=12742.60, stdev=6306.05 00:09:20.494 clat (usec): min=272, max=42023, avg=1441.77, stdev=6681.64 00:09:20.494 lat (usec): min=279, max=42056, avg=1454.52, stdev=6683.34 00:09:20.494 clat percentiles (usec): 00:09:20.494 | 1.00th=[ 277], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310], 00:09:20.494 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 334], 00:09:20.494 | 70.00th=[ 343], 80.00th=[ 351], 90.00th=[ 400], 95.00th=[ 461], 00:09:20.494 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:20.494 | 99.99th=[42206] 00:09:20.494 write: IOPS=999, BW=3996KiB/s (4092kB/s)(4096KiB/1025msec); 0 zone resets 00:09:20.494 slat (usec): min=7, max=31377, avg=47.19, stdev=980.06 00:09:20.494 clat (usec): min=161, max=2624, avg=201.80, stdev=81.65 00:09:20.494 lat (usec): min=170, max=31609, avg=248.99, stdev=984.48 00:09:20.494 clat percentiles (usec): 00:09:20.494 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:09:20.494 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 200], 00:09:20.494 | 70.00th=[ 204], 80.00th=[ 212], 90.00th=[ 229], 95.00th=[ 258], 00:09:20.494 | 99.00th=[ 330], 99.50th=[ 347], 99.90th=[ 449], 99.95th=[ 2638], 00:09:20.494 | 99.99th=[ 2638] 00:09:20.494 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:09:20.494 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:20.494 lat (usec) : 250=62.30%, 500=36.35%, 750=0.39% 00:09:20.494 lat (msec) : 4=0.06%, 50=0.90% 00:09:20.494 cpu : usr=1.56%, sys=2.83%, ctx=1552, majf=0, minf=2 00:09:20.494 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:20.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:20.494 issued rwts: total=525,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:20.494 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:20.494 00:09:20.494 Run status group 0 (all jobs): 00:09:20.494 READ: bw=2049KiB/s (2098kB/s), 2049KiB/s-2049KiB/s (2098kB/s-2098kB/s), io=2100KiB (2150kB), run=1025-1025msec 00:09:20.494 WRITE: bw=3996KiB/s (4092kB/s), 3996KiB/s-3996KiB/s (4092kB/s-4092kB/s), io=4096KiB (4194kB), run=1025-1025msec 00:09:20.494 00:09:20.494 Disk stats (read/write): 00:09:20.494 nvme0n1: ios=573/1024, merge=0/0, ticks=866/200, in_queue=1066, 
util=98.80% 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:20.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1217 -- # local i=0 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # return 0 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:20.494 rmmod nvme_tcp 00:09:20.494 rmmod nvme_fabrics 00:09:20.494 rmmod nvme_keyring 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 3306945 ']' 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 3306945 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 3306945 ']' 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 3306945 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3306945 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3306945' 00:09:20.494 killing process with pid 3306945 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # 
kill 3306945 00:09:20.494 23:46:50 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 3306945 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.753 23:46:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:23.282 00:09:23.282 real 0m9.966s 00:09:23.282 user 0m22.623s 00:09:23.282 sys 0m2.281s 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:23.282 ************************************ 00:09:23.282 END TEST nvmf_nmic 00:09:23.282 ************************************ 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:23.282 ************************************ 00:09:23.282 START TEST nvmf_fio_target 00:09:23.282 ************************************ 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:23.282 * Looking for test storage... 
00:09:23.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:23.282 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.283 23:46:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:23.283 23:46:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:09:23.283 23:46:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:25.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:25.182 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:25.182 
23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:25.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:25.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:25.182 23:46:55 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:25.182 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:25.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:25.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:25.183 00:09:25.183 --- 10.0.0.2 ping statistics --- 00:09:25.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.183 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:25.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:25.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:09:25.183 00:09:25.183 --- 10.0.0.1 ping statistics --- 00:09:25.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:25.183 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=3309560 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 3309560 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 3309560 ']' 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.183 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.183 [2024-07-24 23:46:55.679155] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
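nvmf_tcp_init has now built the whole test topology: the target-side port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, the initiator kept cvl_0_1 as 10.0.0.1, TCP port 4420 was opened in the firewall, both directions answered ping, and nvmf_tgt was launched inside the namespace. Condensed from the trace into a standalone script (a sketch, not the common.sh code; run as root, and $SPDK is a placeholder for the SPDK checkout path):

#!/usr/bin/env bash
# Condensed from the nvmf_tcp_init trace above. Requires root and the two
# cvl_* netdevs the ice driver created for the E810 ports; $SPDK is a
# placeholder for the SPDK repository root.
set -e
TARGET_IF=cvl_0_0 INITIATOR_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                 # target port lives in the namespace

ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator

modprobe nvme-tcp                                    # host-side transport driver
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -m 0xF &  # then poll for /var/tmp/spdk.sock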
00:09:25.183 [2024-07-24 23:46:55.679264] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:25.183 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.183 [2024-07-24 23:46:55.755382] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:25.441 [2024-07-24 23:46:55.879885] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.441 [2024-07-24 23:46:55.879945] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.441 [2024-07-24 23:46:55.879962] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.441 [2024-07-24 23:46:55.879975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.441 [2024-07-24 23:46:55.879987] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.441 [2024-07-24 23:46:55.880066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.441 [2024-07-24 23:46:55.880122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.441 [2024-07-24 23:46:55.880146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:25.441 [2024-07-24 23:46:55.880149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.441 23:46:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.441 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:25.699 [2024-07-24 23:46:56.243204] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.699 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:25.956 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:25.956 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.213 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:26.470 23:46:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.727 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:26.727 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:26.985 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:26.985 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:27.242 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.500 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:27.500 23:46:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:27.757 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:27.757 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:28.015 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:28.015 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:28.272 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:28.529 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.529 23:46:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:28.785 23:46:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:28.786 23:46:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:29.043 23:46:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:29.043 [2024-07-24 23:46:59.623068] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:29.043 23:46:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:29.300 23:46:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:29.557 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:30.122 23:47:00 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:30.122 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local i=0 00:09:30.122 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:09:30.122 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # [[ -n 4 ]] 00:09:30.122 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # nvme_device_counter=4 00:09:30.122 23:47:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # sleep 2 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_devices=4 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # return 0 00:09:32.646 23:47:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:32.646 [global] 00:09:32.646 thread=1 00:09:32.646 invalidate=1 00:09:32.646 rw=write 00:09:32.646 time_based=1 00:09:32.646 runtime=1 00:09:32.646 ioengine=libaio 00:09:32.646 direct=1 00:09:32.646 bs=4096 00:09:32.646 iodepth=1 00:09:32.646 norandommap=0 00:09:32.646 numjobs=1 00:09:32.646 00:09:32.646 verify_dump=1 00:09:32.646 verify_backlog=512 00:09:32.646 verify_state_save=0 00:09:32.646 do_verify=1 00:09:32.646 verify=crc32c-intel 00:09:32.646 [job0] 00:09:32.646 filename=/dev/nvme0n1 00:09:32.646 [job1] 00:09:32.646 filename=/dev/nvme0n2 00:09:32.646 [job2] 00:09:32.646 filename=/dev/nvme0n3 00:09:32.646 [job3] 00:09:32.646 filename=/dev/nvme0n4 00:09:32.646 Could not set queue depth (nvme0n1) 00:09:32.646 Could not set queue depth (nvme0n2) 00:09:32.646 Could not set queue depth (nvme0n3) 00:09:32.646 Could not set queue depth (nvme0n4) 00:09:32.646 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.646 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.646 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.646 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:32.646 fio-3.35 00:09:32.646 Starting 4 threads 00:09:34.016 00:09:34.016 job0: (groupid=0, jobs=1): err= 0: pid=3310631: Wed Jul 24 23:47:04 2024 00:09:34.016 read: IOPS=59, BW=236KiB/s (242kB/s)(240KiB/1015msec) 00:09:34.017 slat (nsec): min=5868, max=50923, avg=19407.90, stdev=10110.77 00:09:34.017 clat (usec): min=290, max=41110, avg=14587.37, stdev=19251.65 00:09:34.017 lat (usec): min=304, max=41117, avg=14606.78, stdev=19251.47 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 289], 5.00th=[ 302], 10.00th=[ 318], 20.00th=[ 
371], 00:09:34.017 | 30.00th=[ 379], 40.00th=[ 392], 50.00th=[ 416], 60.00th=[ 486], 00:09:34.017 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:34.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.017 | 99.99th=[41157] 00:09:34.017 write: IOPS=504, BW=2018KiB/s (2066kB/s)(2048KiB/1015msec); 0 zone resets 00:09:34.017 slat (nsec): min=7512, max=66419, avg=19678.14, stdev=10122.94 00:09:34.017 clat (usec): min=182, max=502, avg=245.76, stdev=47.69 00:09:34.017 lat (usec): min=191, max=516, avg=265.44, stdev=47.16 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 210], 00:09:34.017 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 241], 00:09:34.017 | 70.00th=[ 253], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 338], 00:09:34.017 | 99.00th=[ 416], 99.50th=[ 478], 99.90th=[ 502], 99.95th=[ 502], 00:09:34.017 | 99.99th=[ 502] 00:09:34.017 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.017 lat (usec) : 250=60.66%, 500=34.97%, 750=0.52% 00:09:34.017 lat (msec) : 10=0.17%, 50=3.67% 00:09:34.017 cpu : usr=0.49%, sys=1.08%, ctx=572, majf=0, minf=1 00:09:34.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 issued rwts: total=60,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.017 job1: (groupid=0, jobs=1): err= 0: pid=3310632: Wed Jul 24 23:47:04 2024 00:09:34.017 read: IOPS=200, BW=800KiB/s (819kB/s)(808KiB/1010msec) 00:09:34.017 slat (nsec): min=5869, max=46190, avg=17768.69, stdev=10151.66 00:09:34.017 clat (usec): min=280, max=41060, avg=4290.62, stdev=11829.89 00:09:34.017 lat (usec): min=292, max=41068, avg=4308.38, stdev=11830.66 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 297], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 392], 00:09:34.017 | 30.00th=[ 433], 40.00th=[ 453], 50.00th=[ 465], 60.00th=[ 490], 00:09:34.017 | 70.00th=[ 506], 80.00th=[ 537], 90.00th=[ 1696], 95.00th=[41157], 00:09:34.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.017 | 99.99th=[41157] 00:09:34.017 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:09:34.017 slat (nsec): min=7338, max=72227, avg=17237.50, stdev=10220.71 00:09:34.017 clat (usec): min=175, max=1077, avg=247.47, stdev=62.85 00:09:34.017 lat (usec): min=196, max=1102, avg=264.71, stdev=64.31 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 188], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 208], 00:09:34.017 | 30.00th=[ 215], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:09:34.017 | 70.00th=[ 255], 80.00th=[ 277], 90.00th=[ 326], 95.00th=[ 371], 00:09:34.017 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 1074], 99.95th=[ 1074], 00:09:34.017 | 99.99th=[ 1074] 00:09:34.017 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.017 lat (usec) : 250=48.88%, 500=41.04%, 750=6.86%, 1000=0.14% 00:09:34.017 lat (msec) : 2=0.28%, 10=0.14%, 50=2.66% 00:09:34.017 cpu : usr=0.59%, sys=1.29%, ctx=714, majf=0, minf=1 00:09:34.017 IO depths : 
1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 issued rwts: total=202,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.017 job2: (groupid=0, jobs=1): err= 0: pid=3310633: Wed Jul 24 23:47:04 2024 00:09:34.017 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:09:34.017 slat (nsec): min=8786, max=36112, avg=20494.10, stdev=10023.94 00:09:34.017 clat (usec): min=40901, max=41069, avg=40976.81, stdev=45.51 00:09:34.017 lat (usec): min=40918, max=41084, avg=40997.30, stdev=42.17 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:34.017 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:34.017 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:34.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.017 | 99.99th=[41157] 00:09:34.017 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:34.017 slat (nsec): min=9342, max=53979, avg=17735.04, stdev=8608.70 00:09:34.017 clat (usec): min=200, max=1216, avg=269.11, stdev=53.65 00:09:34.017 lat (usec): min=211, max=1226, avg=286.85, stdev=55.00 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 210], 5.00th=[ 225], 10.00th=[ 233], 20.00th=[ 241], 00:09:34.017 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:09:34.017 | 70.00th=[ 277], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[ 334], 00:09:34.017 | 99.00th=[ 363], 99.50th=[ 424], 99.90th=[ 1221], 99.95th=[ 1221], 00:09:34.017 | 99.99th=[ 1221] 00:09:34.017 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=1 00:09:34.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:34.017 lat (usec) : 250=29.64%, 500=66.23% 00:09:34.017 lat (msec) : 2=0.19%, 50=3.94% 00:09:34.017 cpu : usr=0.99%, sys=0.79%, ctx=533, majf=0, minf=1 00:09:34.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.017 job3: (groupid=0, jobs=1): err= 0: pid=3310634: Wed Jul 24 23:47:04 2024 00:09:34.017 read: IOPS=505, BW=2023KiB/s (2072kB/s)(2080KiB/1028msec) 00:09:34.017 slat (nsec): min=6276, max=65967, avg=20980.38, stdev=11549.22 00:09:34.017 clat (usec): min=258, max=41401, avg=1470.74, stdev=6579.12 00:09:34.017 lat (usec): min=265, max=41408, avg=1491.72, stdev=6579.39 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 310], 00:09:34.017 | 30.00th=[ 326], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 375], 00:09:34.017 | 70.00th=[ 404], 80.00th=[ 449], 90.00th=[ 502], 95.00th=[ 537], 00:09:34.017 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:34.017 | 99.99th=[41157] 00:09:34.017 write: IOPS=996, BW=3984KiB/s (4080kB/s)(4096KiB/1028msec); 0 zone resets 00:09:34.017 slat (usec): min=6, max=1637, avg=13.64, stdev=51.27 00:09:34.017 clat (usec): min=159, max=1170, avg=207.86, 
stdev=59.74 00:09:34.017 lat (usec): min=166, max=1895, avg=221.50, stdev=82.08 00:09:34.017 clat percentiles (usec): 00:09:34.017 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172], 00:09:34.017 | 30.00th=[ 176], 40.00th=[ 182], 50.00th=[ 194], 60.00th=[ 210], 00:09:34.017 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 253], 95.00th=[ 285], 00:09:34.017 | 99.00th=[ 416], 99.50th=[ 429], 99.90th=[ 857], 99.95th=[ 1172], 00:09:34.017 | 99.99th=[ 1172] 00:09:34.017 bw ( KiB/s): min= 4096, max= 4096, per=41.12%, avg=4096.00, stdev= 0.00, samples=2 00:09:34.017 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:34.017 lat (usec) : 250=59.07%, 500=37.11%, 750=2.66%, 1000=0.19% 00:09:34.017 lat (msec) : 2=0.06%, 50=0.91% 00:09:34.017 cpu : usr=1.85%, sys=1.66%, ctx=1546, majf=0, minf=2 00:09:34.017 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:34.017 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.017 issued rwts: total=520,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.017 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:34.017 00:09:34.017 Run status group 0 (all jobs): 00:09:34.017 READ: bw=3125KiB/s (3200kB/s), 83.1KiB/s-2023KiB/s (85.1kB/s-2072kB/s), io=3212KiB (3289kB), run=1010-1028msec 00:09:34.017 WRITE: bw=9961KiB/s (10.2MB/s), 2018KiB/s-3984KiB/s (2066kB/s-4080kB/s), io=10.0MiB (10.5MB), run=1010-1028msec 00:09:34.017 00:09:34.017 Disk stats (read/write): 00:09:34.017 nvme0n1: ios=105/512, merge=0/0, ticks=909/125, in_queue=1034, util=89.58% 00:09:34.017 nvme0n2: ios=219/512, merge=0/0, ticks=1566/125, in_queue=1691, util=90.96% 00:09:34.017 nvme0n3: ios=74/512, merge=0/0, ticks=1508/133, in_queue=1641, util=93.32% 00:09:34.017 nvme0n4: ios=572/1024, merge=0/0, ticks=647/207, in_queue=854, util=91.90% 00:09:34.017 23:47:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:34.017 [global] 00:09:34.017 thread=1 00:09:34.017 invalidate=1 00:09:34.017 rw=randwrite 00:09:34.017 time_based=1 00:09:34.017 runtime=1 00:09:34.017 ioengine=libaio 00:09:34.017 direct=1 00:09:34.017 bs=4096 00:09:34.017 iodepth=1 00:09:34.017 norandommap=0 00:09:34.017 numjobs=1 00:09:34.017 00:09:34.017 verify_dump=1 00:09:34.017 verify_backlog=512 00:09:34.017 verify_state_save=0 00:09:34.017 do_verify=1 00:09:34.017 verify=crc32c-intel 00:09:34.017 [job0] 00:09:34.017 filename=/dev/nvme0n1 00:09:34.017 [job1] 00:09:34.017 filename=/dev/nvme0n2 00:09:34.017 [job2] 00:09:34.017 filename=/dev/nvme0n3 00:09:34.017 [job3] 00:09:34.017 filename=/dev/nvme0n4 00:09:34.017 Could not set queue depth (nvme0n1) 00:09:34.017 Could not set queue depth (nvme0n2) 00:09:34.017 Could not set queue depth (nvme0n3) 00:09:34.017 Could not set queue depth (nvme0n4) 00:09:34.017 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.018 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.018 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.018 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:34.018 fio-3.35 00:09:34.018 Starting 4 threads 
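For reference while reading these fio results: the four devices under test map back to the RPC provisioning traced earlier. Malloc0 and Malloc1 become namespaces 1-2, a RAID-0 over Malloc2+Malloc3 becomes namespace 3, and a concat over Malloc4-6 becomes namespace 4, all exported through cnode1 at 10.0.0.2:4420. Pulled together as one sketch (order lightly regrouped; $RPC abbreviates the full /var/jenkins/.../spdk/scripts/rpc.py path from the trace, and the nvme connect --hostnqn/--hostid flags are omitted):

#!/usr/bin/env bash
# The target/fio.sh provisioning sequence, collected from the trace above.
# $RPC abbreviates the full spdk/scripts/rpc.py path used in the log.
RPC="./scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

$RPC nvmf_create_transport -t tcp -o -u 8192          # flags exactly as traced
for i in 0 1 2 3 4 5 6; do
    $RPC bdev_malloc_create 64 512                    # Malloc0..Malloc6: 64 MiB, 512 B blocks
done
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
$RPC bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

$RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do
    $RPC nvmf_subsystem_add_ns "$NQN" "$bdev"         # becomes nvme0n1..nvme0n4 in order
done
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420     # host NQN/ID flags omitted here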
00:09:35.388 00:09:35.388 job0: (groupid=0, jobs=1): err= 0: pid=3310866: Wed Jul 24 23:47:05 2024 00:09:35.388 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:35.388 slat (nsec): min=4423, max=78958, avg=18358.06, stdev=10628.64 00:09:35.388 clat (usec): min=245, max=625, avg=361.15, stdev=77.90 00:09:35.388 lat (usec): min=252, max=639, avg=379.51, stdev=80.52 00:09:35.388 clat percentiles (usec): 00:09:35.388 | 1.00th=[ 258], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 289], 00:09:35.389 | 30.00th=[ 306], 40.00th=[ 326], 50.00th=[ 351], 60.00th=[ 367], 00:09:35.389 | 70.00th=[ 379], 80.00th=[ 420], 90.00th=[ 486], 95.00th=[ 523], 00:09:35.389 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 627], 99.95th=[ 627], 00:09:35.389 | 99.99th=[ 627] 00:09:35.389 write: IOPS=1828, BW=7313KiB/s (7488kB/s)(7320KiB/1001msec); 0 zone resets 00:09:35.389 slat (nsec): min=5617, max=38701, avg=11064.51, stdev=4998.47 00:09:35.389 clat (usec): min=155, max=1240, avg=208.99, stdev=49.66 00:09:35.389 lat (usec): min=161, max=1250, avg=220.06, stdev=50.03 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 163], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:09:35.389 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 198], 60.00th=[ 204], 00:09:35.389 | 70.00th=[ 210], 80.00th=[ 221], 90.00th=[ 251], 95.00th=[ 302], 00:09:35.389 | 99.00th=[ 375], 99.50th=[ 388], 99.90th=[ 832], 99.95th=[ 1237], 00:09:35.389 | 99.99th=[ 1237] 00:09:35.389 bw ( KiB/s): min= 8192, max= 8192, per=39.33%, avg=8192.00, stdev= 0.00, samples=1 00:09:35.389 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:35.389 lat (usec) : 250=48.87%, 500=47.53%, 750=3.54%, 1000=0.03% 00:09:35.389 lat (msec) : 2=0.03% 00:09:35.389 cpu : usr=3.00%, sys=4.80%, ctx=3367, majf=0, minf=1 00:09:35.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 issued rwts: total=1536,1830,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.389 job1: (groupid=0, jobs=1): err= 0: pid=3310867: Wed Jul 24 23:47:05 2024 00:09:35.389 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:35.389 slat (nsec): min=5265, max=59967, avg=12670.00, stdev=6706.87 00:09:35.389 clat (usec): min=243, max=686, avg=346.59, stdev=88.83 00:09:35.389 lat (usec): min=249, max=703, avg=359.26, stdev=90.69 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 258], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 281], 00:09:35.389 | 30.00th=[ 293], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 322], 00:09:35.389 | 70.00th=[ 347], 80.00th=[ 420], 90.00th=[ 486], 95.00th=[ 537], 00:09:35.389 | 99.00th=[ 635], 99.50th=[ 644], 99.90th=[ 668], 99.95th=[ 685], 00:09:35.389 | 99.99th=[ 685] 00:09:35.389 write: IOPS=1954, BW=7816KiB/s (8004kB/s)(7824KiB/1001msec); 0 zone resets 00:09:35.389 slat (nsec): min=6726, max=53997, avg=14366.87, stdev=6773.06 00:09:35.389 clat (usec): min=155, max=420, avg=207.58, stdev=21.79 00:09:35.389 lat (usec): min=163, max=430, avg=221.94, stdev=25.94 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 167], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 186], 00:09:35.389 | 30.00th=[ 196], 40.00th=[ 204], 50.00th=[ 210], 60.00th=[ 217], 00:09:35.389 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 239], 00:09:35.389 | 99.00th=[ 255], 
99.50th=[ 277], 99.90th=[ 334], 99.95th=[ 420], 00:09:35.389 | 99.99th=[ 420] 00:09:35.389 bw ( KiB/s): min= 8192, max= 8192, per=39.33%, avg=8192.00, stdev= 0.00, samples=1 00:09:35.389 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:35.389 lat (usec) : 250=55.07%, 500=41.35%, 750=3.58% 00:09:35.389 cpu : usr=3.60%, sys=6.60%, ctx=3492, majf=0, minf=2 00:09:35.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 issued rwts: total=1536,1956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.389 job2: (groupid=0, jobs=1): err= 0: pid=3310868: Wed Jul 24 23:47:05 2024 00:09:35.389 read: IOPS=22, BW=90.0KiB/s (92.2kB/s)(92.0KiB/1022msec) 00:09:35.389 slat (nsec): min=7904, max=39753, avg=21929.65, stdev=10530.01 00:09:35.389 clat (usec): min=345, max=42047, avg=37728.31, stdev=11802.12 00:09:35.389 lat (usec): min=373, max=42066, avg=37750.24, stdev=11802.33 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 347], 5.00th=[ 363], 10.00th=[40633], 20.00th=[41157], 00:09:35.389 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:35.389 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:35.389 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:35.389 | 99.99th=[42206] 00:09:35.389 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:35.389 slat (nsec): min=8004, max=53995, avg=13438.25, stdev=5833.64 00:09:35.389 clat (usec): min=184, max=1325, avg=281.33, stdev=93.61 00:09:35.389 lat (usec): min=192, max=1336, avg=294.76, stdev=94.28 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 208], 20.00th=[ 221], 00:09:35.389 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 253], 60.00th=[ 277], 00:09:35.389 | 70.00th=[ 314], 80.00th=[ 343], 90.00th=[ 367], 95.00th=[ 396], 00:09:35.389 | 99.00th=[ 478], 99.50th=[ 955], 99.90th=[ 1319], 99.95th=[ 1319], 00:09:35.389 | 99.99th=[ 1319] 00:09:35.389 bw ( KiB/s): min= 4096, max= 4096, per=19.66%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.389 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.389 lat (usec) : 250=46.36%, 500=48.97%, 1000=0.37% 00:09:35.389 lat (msec) : 2=0.37%, 50=3.93% 00:09:35.389 cpu : usr=0.49%, sys=0.88%, ctx=539, majf=0, minf=1 00:09:35.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.389 job3: (groupid=0, jobs=1): err= 0: pid=3310869: Wed Jul 24 23:47:05 2024 00:09:35.389 read: IOPS=538, BW=2154KiB/s (2206kB/s)(2156KiB/1001msec) 00:09:35.389 slat (nsec): min=6329, max=67866, avg=27627.13, stdev=10522.34 00:09:35.389 clat (usec): min=299, max=41022, avg=1304.33, stdev=5986.43 00:09:35.389 lat (usec): min=307, max=41040, avg=1331.96, stdev=5984.83 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 310], 5.00th=[ 326], 10.00th=[ 334], 20.00th=[ 351], 00:09:35.389 | 30.00th=[ 383], 40.00th=[ 392], 50.00th=[ 404], 
60.00th=[ 424], 00:09:35.389 | 70.00th=[ 433], 80.00th=[ 441], 90.00th=[ 469], 95.00th=[ 494], 00:09:35.389 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:35.389 | 99.99th=[41157] 00:09:35.389 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:35.389 slat (nsec): min=6782, max=49611, avg=15517.28, stdev=6499.80 00:09:35.389 clat (usec): min=184, max=576, avg=251.62, stdev=53.94 00:09:35.389 lat (usec): min=193, max=586, avg=267.13, stdev=53.36 00:09:35.389 clat percentiles (usec): 00:09:35.389 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:09:35.389 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 241], 00:09:35.389 | 70.00th=[ 262], 80.00th=[ 289], 90.00th=[ 338], 95.00th=[ 371], 00:09:35.389 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 502], 99.95th=[ 578], 00:09:35.389 | 99.99th=[ 578] 00:09:35.389 bw ( KiB/s): min= 4096, max= 4096, per=19.66%, avg=4096.00, stdev= 0.00, samples=1 00:09:35.389 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:35.389 lat (usec) : 250=42.55%, 500=55.79%, 750=0.90% 00:09:35.389 lat (msec) : 50=0.77% 00:09:35.389 cpu : usr=1.60%, sys=3.20%, ctx=1565, majf=0, minf=1 00:09:35.389 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:35.389 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:35.389 issued rwts: total=539,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:35.389 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:35.389 00:09:35.389 Run status group 0 (all jobs): 00:09:35.389 READ: bw=13.9MiB/s (14.6MB/s), 90.0KiB/s-6138KiB/s (92.2kB/s-6285kB/s), io=14.2MiB (14.9MB), run=1001-1022msec 00:09:35.389 WRITE: bw=20.3MiB/s (21.3MB/s), 2004KiB/s-7816KiB/s (2052kB/s-8004kB/s), io=20.8MiB (21.8MB), run=1001-1022msec 00:09:35.389 00:09:35.389 Disk stats (read/write): 00:09:35.389 nvme0n1: ios=1347/1536, merge=0/0, ticks=490/311, in_queue=801, util=87.27% 00:09:35.389 nvme0n2: ios=1406/1536, merge=0/0, ticks=487/274, in_queue=761, util=86.79% 00:09:35.389 nvme0n3: ios=66/512, merge=0/0, ticks=838/138, in_queue=976, util=99.69% 00:09:35.389 nvme0n4: ios=539/512, merge=0/0, ticks=873/138, in_queue=1011, util=98.11% 00:09:35.389 23:47:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:35.389 [global] 00:09:35.389 thread=1 00:09:35.389 invalidate=1 00:09:35.389 rw=write 00:09:35.389 time_based=1 00:09:35.389 runtime=1 00:09:35.389 ioengine=libaio 00:09:35.389 direct=1 00:09:35.389 bs=4096 00:09:35.389 iodepth=128 00:09:35.389 norandommap=0 00:09:35.389 numjobs=1 00:09:35.389 00:09:35.389 verify_dump=1 00:09:35.389 verify_backlog=512 00:09:35.389 verify_state_save=0 00:09:35.389 do_verify=1 00:09:35.389 verify=crc32c-intel 00:09:35.389 [job0] 00:09:35.389 filename=/dev/nvme0n1 00:09:35.390 [job1] 00:09:35.390 filename=/dev/nvme0n2 00:09:35.390 [job2] 00:09:35.390 filename=/dev/nvme0n3 00:09:35.390 [job3] 00:09:35.390 filename=/dev/nvme0n4 00:09:35.390 Could not set queue depth (nvme0n1) 00:09:35.390 Could not set queue depth (nvme0n2) 00:09:35.390 Could not set queue depth (nvme0n3) 00:09:35.390 Could not set queue depth (nvme0n4) 00:09:35.390 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.390 job1: (g=0): 
rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.390 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.390 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:35.390 fio-3.35 00:09:35.390 Starting 4 threads 00:09:36.764 00:09:36.764 job0: (groupid=0, jobs=1): err= 0: pid=3311215: Wed Jul 24 23:47:07 2024 00:09:36.764 read: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec) 00:09:36.764 slat (usec): min=3, max=33855, avg=375.62, stdev=2275.49 00:09:36.764 clat (msec): min=20, max=105, avg=47.33, stdev=19.78 00:09:36.764 lat (msec): min=24, max=105, avg=47.71, stdev=19.80 00:09:36.764 clat percentiles (msec): 00:09:36.764 | 1.00th=[ 26], 5.00th=[ 28], 10.00th=[ 28], 20.00th=[ 29], 00:09:36.764 | 30.00th=[ 34], 40.00th=[ 39], 50.00th=[ 43], 60.00th=[ 50], 00:09:36.764 | 70.00th=[ 55], 80.00th=[ 62], 90.00th=[ 75], 95.00th=[ 96], 00:09:36.764 | 99.00th=[ 106], 99.50th=[ 106], 99.90th=[ 106], 99.95th=[ 106], 00:09:36.764 | 99.99th=[ 106] 00:09:36.764 write: IOPS=1847, BW=7391KiB/s (7568kB/s)(7428KiB/1005msec); 0 zone resets 00:09:36.764 slat (usec): min=4, max=19957, avg=220.17, stdev=1392.45 00:09:36.764 clat (usec): min=1464, max=67320, avg=29262.52, stdev=14605.48 00:09:36.764 lat (usec): min=4264, max=67338, avg=29482.68, stdev=14635.74 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 4490], 5.00th=[12649], 10.00th=[16057], 20.00th=[17171], 00:09:36.764 | 30.00th=[18482], 40.00th=[21365], 50.00th=[25822], 60.00th=[28705], 00:09:36.764 | 70.00th=[33162], 80.00th=[40109], 90.00th=[55313], 95.00th=[59507], 00:09:36.764 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:09:36.764 | 99.99th=[67634] 00:09:36.764 bw ( KiB/s): min= 5640, max= 8192, per=12.15%, avg=6916.00, stdev=1804.54, samples=2 00:09:36.764 iops : min= 1410, max= 2048, avg=1729.00, stdev=451.13, samples=2 00:09:36.764 lat (msec) : 2=0.03%, 10=1.92%, 20=17.15%, 50=54.91%, 100=24.17% 00:09:36.764 lat (msec) : 250=1.83% 00:09:36.764 cpu : usr=2.19%, sys=3.19%, ctx=117, majf=0, minf=1 00:09:36.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:09:36.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.764 issued rwts: total=1536,1857,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.764 job1: (groupid=0, jobs=1): err= 0: pid=3311217: Wed Jul 24 23:47:07 2024 00:09:36.764 read: IOPS=3616, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1003msec) 00:09:36.764 slat (usec): min=3, max=10772, avg=110.14, stdev=622.55 00:09:36.764 clat (usec): min=748, max=69729, avg=13190.22, stdev=7284.48 00:09:36.764 lat (usec): min=3545, max=69750, avg=13300.36, stdev=7354.06 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 4047], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10290], 00:09:36.764 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:09:36.764 | 70.00th=[12256], 80.00th=[13698], 90.00th=[14746], 95.00th=[22414], 00:09:36.764 | 99.00th=[59507], 99.50th=[65274], 99.90th=[69731], 99.95th=[69731], 00:09:36.764 | 99.99th=[69731] 00:09:36.764 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:36.764 slat (usec): min=4, max=14016, avg=134.65, stdev=753.11 00:09:36.764 clat (usec): 
min=604, max=90919, avg=18916.83, stdev=16149.50 00:09:36.764 lat (usec): min=617, max=90942, avg=19051.49, stdev=16254.64 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 7308], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10552], 00:09:36.764 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:09:36.764 | 70.00th=[12780], 80.00th=[27395], 90.00th=[36439], 95.00th=[56886], 00:09:36.764 | 99.00th=[86508], 99.50th=[87557], 99.90th=[90702], 99.95th=[90702], 00:09:36.764 | 99.99th=[90702] 00:09:36.764 bw ( KiB/s): min=12080, max=20008, per=28.19%, avg=16044.00, stdev=5605.94, samples=2 00:09:36.764 iops : min= 3020, max= 5002, avg=4011.00, stdev=1401.49, samples=2 00:09:36.764 lat (usec) : 750=0.05% 00:09:36.764 lat (msec) : 2=0.06%, 4=0.52%, 10=9.47%, 20=74.08%, 50=11.24% 00:09:36.764 lat (msec) : 100=4.58% 00:09:36.764 cpu : usr=5.19%, sys=9.08%, ctx=377, majf=0, minf=1 00:09:36.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:36.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.764 issued rwts: total=3627,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.764 job2: (groupid=0, jobs=1): err= 0: pid=3311218: Wed Jul 24 23:47:07 2024 00:09:36.764 read: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec) 00:09:36.764 slat (usec): min=2, max=16347, avg=110.75, stdev=770.41 00:09:36.764 clat (usec): min=1761, max=46010, avg=15208.67, stdev=5005.64 00:09:36.764 lat (usec): min=1771, max=46015, avg=15319.42, stdev=5053.66 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 5276], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10945], 00:09:36.764 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13829], 60.00th=[16319], 00:09:36.764 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21890], 95.00th=[24773], 00:09:36.764 | 99.00th=[30540], 99.50th=[31589], 99.90th=[31589], 99.95th=[39584], 00:09:36.764 | 99.99th=[45876] 00:09:36.764 write: IOPS=4757, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1007msec); 0 zone resets 00:09:36.764 slat (usec): min=3, max=9921, avg=83.46, stdev=563.60 00:09:36.764 clat (usec): min=721, max=35895, avg=11926.40, stdev=5401.85 00:09:36.764 lat (usec): min=738, max=35904, avg=12009.86, stdev=5450.74 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 1156], 5.00th=[ 3458], 10.00th=[ 4883], 20.00th=[ 7439], 00:09:36.764 | 30.00th=[10159], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:09:36.764 | 70.00th=[14484], 80.00th=[16909], 90.00th=[18482], 95.00th=[19006], 00:09:36.764 | 99.00th=[29492], 99.50th=[31327], 99.90th=[34866], 99.95th=[35914], 00:09:36.764 | 99.99th=[35914] 00:09:36.764 bw ( KiB/s): min=13872, max=23440, per=32.78%, avg=18656.00, stdev=6765.60, samples=2 00:09:36.764 iops : min= 3468, max= 5860, avg=4664.00, stdev=1691.40, samples=2 00:09:36.764 lat (usec) : 750=0.02%, 1000=0.11% 00:09:36.764 lat (msec) : 2=1.59%, 4=2.20%, 10=15.68%, 20=70.84%, 50=9.56% 00:09:36.764 cpu : usr=4.17%, sys=8.55%, ctx=418, majf=0, minf=1 00:09:36.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:36.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.764 issued rwts: total=4608,4791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.764 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:09:36.764 job3: (groupid=0, jobs=1): err= 0: pid=3311219: Wed Jul 24 23:47:07 2024 00:09:36.764 read: IOPS=3261, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1002msec) 00:09:36.764 slat (usec): min=2, max=17302, avg=143.59, stdev=947.67 00:09:36.764 clat (usec): min=585, max=52601, avg=17976.45, stdev=7930.51 00:09:36.764 lat (usec): min=1930, max=52623, avg=18120.05, stdev=7989.00 00:09:36.764 clat percentiles (usec): 00:09:36.764 | 1.00th=[ 4178], 5.00th=[ 9896], 10.00th=[11076], 20.00th=[12256], 00:09:36.764 | 30.00th=[12780], 40.00th=[13304], 50.00th=[15926], 60.00th=[16909], 00:09:36.764 | 70.00th=[19792], 80.00th=[23200], 90.00th=[29754], 95.00th=[35914], 00:09:36.764 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[45876], 00:09:36.764 | 99.99th=[52691] 00:09:36.764 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:36.764 slat (usec): min=3, max=11660, avg=139.81, stdev=751.83 00:09:36.764 clat (msec): min=8, max=100, avg=18.85, stdev=15.28 00:09:36.764 lat (msec): min=8, max=100, avg=18.99, stdev=15.37 00:09:36.764 clat percentiles (msec): 00:09:36.764 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 12], 00:09:36.764 | 30.00th=[ 13], 40.00th=[ 13], 50.00th=[ 14], 60.00th=[ 15], 00:09:36.764 | 70.00th=[ 16], 80.00th=[ 24], 90.00th=[ 31], 95.00th=[ 46], 00:09:36.764 | 99.00th=[ 97], 99.50th=[ 100], 99.90th=[ 101], 99.95th=[ 101], 00:09:36.764 | 99.99th=[ 101] 00:09:36.764 bw ( KiB/s): min=12288, max=16384, per=25.19%, avg=14336.00, stdev=2896.31, samples=2 00:09:36.764 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:09:36.764 lat (usec) : 750=0.01% 00:09:36.764 lat (msec) : 2=0.07%, 4=0.23%, 10=4.17%, 20=69.53%, 50=23.54% 00:09:36.764 lat (msec) : 100=2.34%, 250=0.10% 00:09:36.764 cpu : usr=4.00%, sys=6.19%, ctx=322, majf=0, minf=1 00:09:36.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:36.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.764 issued rwts: total=3268,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.764 00:09:36.764 Run status group 0 (all jobs): 00:09:36.764 READ: bw=50.6MiB/s (53.0MB/s), 6113KiB/s-17.9MiB/s (6260kB/s-18.7MB/s), io=50.9MiB (53.4MB), run=1002-1007msec 00:09:36.764 WRITE: bw=55.6MiB/s (58.3MB/s), 7391KiB/s-18.6MiB/s (7568kB/s-19.5MB/s), io=56.0MiB (58.7MB), run=1002-1007msec 00:09:36.764 00:09:36.764 Disk stats (read/write): 00:09:36.764 nvme0n1: ios=1266/1536, merge=0/0, ticks=14992/11361, in_queue=26353, util=86.37% 00:09:36.764 nvme0n2: ios=2864/3072, merge=0/0, ticks=28679/53551, in_queue=82230, util=97.36% 00:09:36.764 nvme0n3: ios=4144/4234, merge=0/0, ticks=36237/32618, in_queue=68855, util=98.85% 00:09:36.764 nvme0n4: ios=2612/3072, merge=0/0, ticks=18542/19129, in_queue=37671, util=98.94% 00:09:36.764 23:47:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:36.764 [global] 00:09:36.764 thread=1 00:09:36.764 invalidate=1 00:09:36.764 rw=randwrite 00:09:36.764 time_based=1 00:09:36.764 runtime=1 00:09:36.764 ioengine=libaio 00:09:36.764 direct=1 00:09:36.764 bs=4096 00:09:36.764 iodepth=128 00:09:36.764 norandommap=0 00:09:36.764 numjobs=1 00:09:36.764 00:09:36.764 verify_dump=1 00:09:36.764 
verify_backlog=512 00:09:36.764 verify_state_save=0 00:09:36.764 do_verify=1 00:09:36.764 verify=crc32c-intel 00:09:36.764 [job0] 00:09:36.764 filename=/dev/nvme0n1 00:09:36.764 [job1] 00:09:36.764 filename=/dev/nvme0n2 00:09:36.764 [job2] 00:09:36.764 filename=/dev/nvme0n3 00:09:36.764 [job3] 00:09:36.764 filename=/dev/nvme0n4 00:09:36.764 Could not set queue depth (nvme0n1) 00:09:36.764 Could not set queue depth (nvme0n2) 00:09:36.764 Could not set queue depth (nvme0n3) 00:09:36.764 Could not set queue depth (nvme0n4) 00:09:36.764 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.764 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.764 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.764 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:36.764 fio-3.35 00:09:36.765 Starting 4 threads 00:09:38.171 00:09:38.171 job0: (groupid=0, jobs=1): err= 0: pid=3311445: Wed Jul 24 23:47:08 2024 00:09:38.171 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:09:38.171 slat (usec): min=2, max=7108, avg=94.49, stdev=478.87 00:09:38.171 clat (usec): min=7800, max=24473, avg=12202.78, stdev=1766.52 00:09:38.171 lat (usec): min=7806, max=24491, avg=12297.26, stdev=1788.14 00:09:38.171 clat percentiles (usec): 00:09:38.171 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:09:38.171 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:09:38.171 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14353], 95.00th=[15139], 00:09:38.171 | 99.00th=[18482], 99.50th=[22152], 99.90th=[22152], 99.95th=[22152], 00:09:38.171 | 99.99th=[24511] 00:09:38.171 write: IOPS=5370, BW=21.0MiB/s (22.0MB/s)(21.0MiB/1002msec); 0 zone resets 00:09:38.171 slat (usec): min=4, max=6398, avg=86.95, stdev=423.28 00:09:38.171 clat (usec): min=405, max=24076, avg=11912.80, stdev=1952.89 00:09:38.171 lat (usec): min=2913, max=24095, avg=11999.75, stdev=1962.66 00:09:38.171 clat percentiles (usec): 00:09:38.171 | 1.00th=[ 6390], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10814], 00:09:38.171 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11731], 60.00th=[12125], 00:09:38.171 | 70.00th=[12518], 80.00th=[12911], 90.00th=[13698], 95.00th=[14353], 00:09:38.171 | 99.00th=[20317], 99.50th=[20317], 99.90th=[20841], 99.95th=[23462], 00:09:38.171 | 99.99th=[23987] 00:09:38.171 bw ( KiB/s): min=19576, max=22456, per=30.20%, avg=21016.00, stdev=2036.47, samples=2 00:09:38.171 iops : min= 4894, max= 5614, avg=5254.00, stdev=509.12, samples=2 00:09:38.171 lat (usec) : 500=0.01% 00:09:38.171 lat (msec) : 4=0.30%, 10=5.04%, 20=93.81%, 50=0.84% 00:09:38.171 cpu : usr=6.29%, sys=9.09%, ctx=539, majf=0, minf=1 00:09:38.171 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:38.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.171 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.172 issued rwts: total=5120,5381,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.172 job1: (groupid=0, jobs=1): err= 0: pid=3311446: Wed Jul 24 23:47:08 2024 00:09:38.172 read: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec) 00:09:38.172 slat (usec): min=2, max=13682, avg=106.85, stdev=690.40 00:09:38.172 
clat (usec): min=2059, max=37319, avg=13664.00, stdev=4544.97 00:09:38.172 lat (usec): min=2070, max=37323, avg=13770.85, stdev=4579.34 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 3818], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[11731], 00:09:38.172 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[12780], 00:09:38.172 | 70.00th=[13304], 80.00th=[14615], 90.00th=[20055], 95.00th=[22414], 00:09:38.172 | 99.00th=[31065], 99.50th=[34341], 99.90th=[37487], 99.95th=[37487], 00:09:38.172 | 99.99th=[37487] 00:09:38.172 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:09:38.172 slat (usec): min=3, max=9207, avg=108.18, stdev=514.31 00:09:38.172 clat (usec): min=3395, max=48909, avg=15055.46, stdev=6609.91 00:09:38.172 lat (usec): min=3402, max=48940, avg=15163.63, stdev=6638.97 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 4080], 5.00th=[ 7046], 10.00th=[ 9503], 20.00th=[11600], 00:09:38.172 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[13435], 00:09:38.172 | 70.00th=[15139], 80.00th=[17433], 90.00th=[25822], 95.00th=[28967], 00:09:38.172 | 99.00th=[35914], 99.50th=[36963], 99.90th=[49021], 99.95th=[49021], 00:09:38.172 | 99.99th=[49021] 00:09:38.172 bw ( KiB/s): min=15960, max=20376, per=26.11%, avg=18168.00, stdev=3122.58, samples=2 00:09:38.172 iops : min= 3990, max= 5094, avg=4542.00, stdev=780.65, samples=2 00:09:38.172 lat (msec) : 4=1.00%, 10=8.48%, 20=75.79%, 50=14.73% 00:09:38.172 cpu : usr=4.99%, sys=6.49%, ctx=487, majf=0, minf=1 00:09:38.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:38.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.172 issued rwts: total=4157,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.172 job2: (groupid=0, jobs=1): err= 0: pid=3311447: Wed Jul 24 23:47:08 2024 00:09:38.172 read: IOPS=4281, BW=16.7MiB/s (17.5MB/s)(17.5MiB/1045msec) 00:09:38.172 slat (usec): min=2, max=11847, avg=110.73, stdev=648.81 00:09:38.172 clat (usec): min=7186, max=54603, avg=14710.77, stdev=6762.69 00:09:38.172 lat (usec): min=8217, max=60508, avg=14821.50, stdev=6783.71 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 8848], 5.00th=[10290], 10.00th=[11338], 20.00th=[12387], 00:09:38.172 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13435], 00:09:38.172 | 70.00th=[13960], 80.00th=[14615], 90.00th=[17171], 95.00th=[22676], 00:09:38.172 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54264], 99.95th=[54789], 00:09:38.172 | 99.99th=[54789] 00:09:38.172 write: IOPS=4409, BW=17.2MiB/s (18.1MB/s)(18.0MiB/1045msec); 0 zone resets 00:09:38.172 slat (usec): min=4, max=10451, avg=100.99, stdev=447.63 00:09:38.172 clat (usec): min=6526, max=50601, avg=14411.80, stdev=6223.95 00:09:38.172 lat (usec): min=6554, max=50622, avg=14512.79, stdev=6250.25 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 7832], 5.00th=[10028], 10.00th=[11469], 20.00th=[12256], 00:09:38.172 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:09:38.172 | 70.00th=[13829], 80.00th=[14222], 90.00th=[15926], 95.00th=[19792], 00:09:38.172 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:09:38.172 | 99.99th=[50594] 00:09:38.172 bw ( KiB/s): min=16384, max=20521, per=26.51%, avg=18452.50, stdev=2925.30, samples=2 00:09:38.172 iops : 
min= 4096, max= 5130, avg=4613.00, stdev=731.15, samples=2 00:09:38.172 lat (msec) : 10=4.35%, 20=90.42%, 50=3.77%, 100=1.46% 00:09:38.172 cpu : usr=4.31%, sys=8.72%, ctx=564, majf=0, minf=1 00:09:38.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:38.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.172 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.172 job3: (groupid=0, jobs=1): err= 0: pid=3311448: Wed Jul 24 23:47:08 2024 00:09:38.172 read: IOPS=3500, BW=13.7MiB/s (14.3MB/s)(13.8MiB/1007msec) 00:09:38.172 slat (usec): min=3, max=21877, avg=147.11, stdev=996.97 00:09:38.172 clat (usec): min=4668, max=60972, avg=18502.34, stdev=9959.88 00:09:38.172 lat (usec): min=4677, max=60985, avg=18649.46, stdev=10033.80 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 4817], 5.00th=[ 8848], 10.00th=[11469], 20.00th=[12911], 00:09:38.172 | 30.00th=[13566], 40.00th=[13698], 50.00th=[14091], 60.00th=[17171], 00:09:38.172 | 70.00th=[19268], 80.00th=[21627], 90.00th=[28443], 95.00th=[44303], 00:09:38.172 | 99.00th=[54789], 99.50th=[54789], 99.90th=[57410], 99.95th=[60556], 00:09:38.172 | 99.99th=[61080] 00:09:38.172 write: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec); 0 zone resets 00:09:38.172 slat (usec): min=4, max=10645, avg=124.26, stdev=595.85 00:09:38.172 clat (usec): min=4205, max=61150, avg=17293.03, stdev=9702.54 00:09:38.172 lat (usec): min=4222, max=61171, avg=17417.29, stdev=9763.26 00:09:38.172 clat percentiles (usec): 00:09:38.172 | 1.00th=[ 5276], 5.00th=[ 8586], 10.00th=[11338], 20.00th=[12649], 00:09:38.172 | 30.00th=[13304], 40.00th=[13829], 50.00th=[14091], 60.00th=[14615], 00:09:38.172 | 70.00th=[15270], 80.00th=[22152], 90.00th=[26346], 95.00th=[35390], 00:09:38.172 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:09:38.172 | 99.99th=[61080] 00:09:38.172 bw ( KiB/s): min=10512, max=18160, per=20.60%, avg=14336.00, stdev=5407.95, samples=2 00:09:38.172 iops : min= 2628, max= 4540, avg=3584.00, stdev=1351.99, samples=2 00:09:38.172 lat (msec) : 10=7.68%, 20=69.01%, 50=20.50%, 100=2.81% 00:09:38.172 cpu : usr=3.78%, sys=7.95%, ctx=438, majf=0, minf=1 00:09:38.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:38.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:38.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:38.172 issued rwts: total=3525,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:38.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:38.172 00:09:38.172 Run status group 0 (all jobs): 00:09:38.172 READ: bw=64.6MiB/s (67.7MB/s), 13.7MiB/s-20.0MiB/s (14.3MB/s-20.9MB/s), io=67.5MiB (70.8MB), run=1002-1045msec 00:09:38.172 WRITE: bw=68.0MiB/s (71.3MB/s), 13.9MiB/s-21.0MiB/s (14.6MB/s-22.0MB/s), io=71.0MiB (74.5MB), run=1002-1045msec 00:09:38.172 00:09:38.172 Disk stats (read/write): 00:09:38.172 nvme0n1: ios=4398/4608, merge=0/0, ticks=17259/16055, in_queue=33314, util=97.90% 00:09:38.172 nvme0n2: ios=3599/3584, merge=0/0, ticks=32202/35703, in_queue=67905, util=91.36% 00:09:38.172 nvme0n3: ios=3636/4055, merge=0/0, ticks=24982/27817, in_queue=52799, util=97.81% 00:09:38.172 nvme0n4: ios=2560/3071, merge=0/0, ticks=29986/30420, in_queue=60406, util=89.68% 00:09:38.172 
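(A sketch of the kind of job file behind the run summarized above — the actual file is generated by scripts/fio-wrapper and is not printed in this log; rw=randrw is an assumption inferred from the mixed read/write stats, and bs/iodepth/runtime are inferred from the reported 4096-byte transfers, depth=128, and ~1s run times:)

[global]
thread=1
invalidate=1
rw=randrw
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
numjobs=1

[job0]
filename=/dev/nvme0n1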
23:47:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:38.172 23:47:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3311592 00:09:38.172 23:47:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:38.172 23:47:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:38.172 [global] 00:09:38.172 thread=1 00:09:38.172 invalidate=1 00:09:38.172 rw=read 00:09:38.172 time_based=1 00:09:38.172 runtime=10 00:09:38.172 ioengine=libaio 00:09:38.172 direct=1 00:09:38.172 bs=4096 00:09:38.172 iodepth=1 00:09:38.172 norandommap=1 00:09:38.172 numjobs=1 00:09:38.172 00:09:38.172 [job0] 00:09:38.172 filename=/dev/nvme0n1 00:09:38.172 [job1] 00:09:38.172 filename=/dev/nvme0n2 00:09:38.172 [job2] 00:09:38.172 filename=/dev/nvme0n3 00:09:38.172 [job3] 00:09:38.172 filename=/dev/nvme0n4 00:09:38.172 Could not set queue depth (nvme0n1) 00:09:38.172 Could not set queue depth (nvme0n2) 00:09:38.172 Could not set queue depth (nvme0n3) 00:09:38.173 Could not set queue depth (nvme0n4) 00:09:38.430 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.430 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.430 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.430 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:38.430 fio-3.35 00:09:38.430 Starting 4 threads 00:09:41.707 23:47:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:41.707 23:47:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:41.707 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=10162176, buflen=4096 00:09:41.707 fio: pid=3311687, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.707 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.707 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:41.707 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=688128, buflen=4096 00:09:41.707 fio: pid=3311686, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:41.964 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:41.964 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:41.964 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=503808, buflen=4096 00:09:41.964 fio: pid=3311684, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:42.222 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=6385664, buflen=4096 00:09:42.222 fio: pid=3311685, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 
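(What the hotplug exercise above amounts to, condensed into a standalone shell sketch — the fio-wrapper invocation, rpc.py calls, and bdev names mirror the trace, but this is an illustration rather than the fio.sh source:)

# launch a 10-second read job against the exported namespaces in the background
scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &
fio_pid=$!
# delete the backing bdevs while the job is still reading; every I/O against
# a removed bdev then completes with EREMOTEIO (the err=121 seen above)
scripts/rpc.py bdev_raid_delete concat0
scripts/rpc.py bdev_raid_delete raid0
scripts/rpc.py bdev_malloc_delete Malloc0
scripts/rpc.py bdev_malloc_delete Malloc1
# the job is expected to fail once its targets disappear
wait $fio_pid || echo 'nvmf hotplug test: fio failed as expected'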
00:09:42.222 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.222 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:42.222 00:09:42.222 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3311684: Wed Jul 24 23:47:12 2024 00:09:42.222 read: IOPS=36, BW=145KiB/s (148kB/s)(492KiB/3395msec) 00:09:42.222 slat (usec): min=7, max=9813, avg=155.96, stdev=1062.81 00:09:42.222 clat (usec): min=241, max=49123, avg=27254.19, stdev=19420.31 00:09:42.222 lat (usec): min=249, max=50948, avg=27356.53, stdev=19498.27 00:09:42.222 clat percentiles (usec): 00:09:42.222 | 1.00th=[ 249], 5.00th=[ 262], 10.00th=[ 289], 20.00th=[ 453], 00:09:42.222 | 30.00th=[ 529], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:42.222 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:42.222 | 99.00th=[44827], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:09:42.222 | 99.99th=[49021] 00:09:42.222 bw ( KiB/s): min= 96, max= 264, per=3.21%, avg=152.00, stdev=69.00, samples=6 00:09:42.222 iops : min= 24, max= 66, avg=38.00, stdev=17.25, samples=6 00:09:42.222 lat (usec) : 250=1.61%, 500=23.39%, 750=8.06% 00:09:42.222 lat (msec) : 2=0.81%, 50=65.32% 00:09:42.222 cpu : usr=0.18%, sys=0.00%, ctx=126, majf=0, minf=1 00:09:42.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 issued rwts: total=124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.222 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3311685: Wed Jul 24 23:47:12 2024 00:09:42.222 read: IOPS=426, BW=1707KiB/s (1748kB/s)(6236KiB/3654msec) 00:09:42.222 slat (usec): min=5, max=27081, avg=49.81, stdev=835.48 00:09:42.222 clat (usec): min=233, max=43596, avg=2276.86, stdev=8779.84 00:09:42.222 lat (usec): min=239, max=51024, avg=2326.69, stdev=8843.79 00:09:42.222 clat percentiles (usec): 00:09:42.222 | 1.00th=[ 241], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 277], 00:09:42.222 | 30.00th=[ 281], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 289], 00:09:42.222 | 70.00th=[ 297], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 611], 00:09:42.222 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[43779], 00:09:42.222 | 99.99th=[43779] 00:09:42.222 bw ( KiB/s): min= 96, max= 9163, per=29.78%, avg=1413.00, stdev=3417.53, samples=7 00:09:42.222 iops : min= 24, max= 2290, avg=353.14, stdev=854.10, samples=7 00:09:42.222 lat (usec) : 250=2.76%, 500=91.73%, 750=0.58% 00:09:42.222 lat (msec) : 50=4.87% 00:09:42.222 cpu : usr=0.22%, sys=0.55%, ctx=1566, majf=0, minf=1 00:09:42.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 issued rwts: total=1560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.222 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, 
error=Remote I/O error): pid=3311686: Wed Jul 24 23:47:12 2024 00:09:42.222 read: IOPS=53, BW=215KiB/s (220kB/s)(672KiB/3132msec) 00:09:42.222 slat (usec): min=6, max=5923, avg=50.75, stdev=454.56 00:09:42.222 clat (usec): min=246, max=42454, avg=18453.84, stdev=20536.03 00:09:42.222 lat (usec): min=253, max=47005, avg=18504.69, stdev=20587.47 00:09:42.222 clat percentiles (usec): 00:09:42.222 | 1.00th=[ 247], 5.00th=[ 251], 10.00th=[ 255], 20.00th=[ 260], 00:09:42.222 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 306], 60.00th=[41157], 00:09:42.222 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:09:42.222 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:42.222 | 99.99th=[42206] 00:09:42.222 bw ( KiB/s): min= 88, max= 808, per=4.64%, avg=220.00, stdev=288.57, samples=6 00:09:42.222 iops : min= 22, max= 202, avg=55.00, stdev=72.14, samples=6 00:09:42.222 lat (usec) : 250=4.73%, 500=49.70%, 1000=0.59% 00:09:42.222 lat (msec) : 2=0.59%, 50=43.79% 00:09:42.222 cpu : usr=0.10%, sys=0.03%, ctx=172, majf=0, minf=1 00:09:42.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 issued rwts: total=169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.222 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=3311687: Wed Jul 24 23:47:12 2024 00:09:42.222 read: IOPS=857, BW=3429KiB/s (3511kB/s)(9924KiB/2894msec) 00:09:42.222 slat (nsec): min=5519, max=51987, avg=15427.03, stdev=4241.06 00:09:42.222 clat (usec): min=239, max=41098, avg=1137.99, stdev=5771.34 00:09:42.222 lat (usec): min=246, max=41105, avg=1153.41, stdev=5772.35 00:09:42.222 clat percentiles (usec): 00:09:42.222 | 1.00th=[ 251], 5.00th=[ 265], 10.00th=[ 277], 20.00th=[ 285], 00:09:42.222 | 30.00th=[ 293], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 302], 00:09:42.222 | 70.00th=[ 306], 80.00th=[ 310], 90.00th=[ 318], 95.00th=[ 424], 00:09:42.222 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:42.222 | 99.99th=[41157] 00:09:42.222 bw ( KiB/s): min= 96, max=12616, per=83.42%, avg=3955.20, stdev=5157.18, samples=5 00:09:42.222 iops : min= 24, max= 3154, avg=988.80, stdev=1289.29, samples=5 00:09:42.222 lat (usec) : 250=0.64%, 500=95.73%, 750=1.53% 00:09:42.222 lat (msec) : 50=2.05% 00:09:42.222 cpu : usr=1.04%, sys=1.87%, ctx=2482, majf=0, minf=1 00:09:42.222 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:42.222 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.222 issued rwts: total=2482,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.222 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:42.222 00:09:42.222 Run status group 0 (all jobs): 00:09:42.222 READ: bw=4741KiB/s (4855kB/s), 145KiB/s-3429KiB/s (148kB/s-3511kB/s), io=16.9MiB (17.7MB), run=2894-3654msec 00:09:42.222 00:09:42.222 Disk stats (read/write): 00:09:42.222 nvme0n1: ios=122/0, merge=0/0, ticks=3307/0, in_queue=3307, util=95.68% 00:09:42.222 nvme0n2: ios=1476/0, merge=0/0, ticks=3725/0, in_queue=3725, util=97.48% 00:09:42.222 nvme0n3: ios=189/0, merge=0/0, ticks=3199/0, in_queue=3199, util=98.88% 00:09:42.222 nvme0n4: ios=2480/0, 
merge=0/0, ticks=2747/0, in_queue=2747, util=96.75% 00:09:42.480 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.480 23:47:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:42.737 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.737 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:42.994 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:42.994 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:43.252 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:43.252 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:43.509 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:43.509 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3311592 00:09:43.509 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:43.509 23:47:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:43.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1217 -- # local i=0 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # return 0 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:43.509 nvmf hotplug test: fio failed as expected 00:09:43.509 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f 
./local-job1-1-verify.state 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:43.767 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:44.024 rmmod nvme_tcp 00:09:44.024 rmmod nvme_fabrics 00:09:44.024 rmmod nvme_keyring 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 3309560 ']' 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 3309560 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 3309560 ']' 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 3309560 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3309560 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3309560' 00:09:44.024 killing process with pid 3309560 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 3309560 00:09:44.024 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 3309560 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.282 23:47:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.182 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:46.182 00:09:46.182 real 0m23.460s 00:09:46.182 user 1m22.402s 00:09:46.182 sys 0m5.978s 00:09:46.182 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:46.182 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.182 ************************************ 00:09:46.182 END TEST nvmf_fio_target 00:09:46.182 ************************************ 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:46.441 ************************************ 00:09:46.441 START TEST nvmf_bdevio 00:09:46.441 ************************************ 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:46.441 * Looking for test storage... 00:09:46.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:46.441 23:47:16 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:46.441 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:46.442 23:47:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 
00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:48.340 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:48.340 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:48.340 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:48.341 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 
)) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:48.341 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:48.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:48.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:09:48.341 00:09:48.341 --- 10.0.0.2 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:48.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:09:48.341 00:09:48.341 --- 10.0.0.1 ping statistics --- 00:09:48.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.341 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=3314308 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 3314308 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 3314308 ']' 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:48.341 23:47:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:48.600 [2024-07-24 23:47:18.994985] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:09:48.600 [2024-07-24 23:47:18.995073] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.600 EAL: No free 2048 kB hugepages reported on node 1 00:09:48.600 [2024-07-24 23:47:19.066419] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:48.600 [2024-07-24 23:47:19.189414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.600 [2024-07-24 23:47:19.189480] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.600 [2024-07-24 23:47:19.189497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.600 [2024-07-24 23:47:19.189510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.600 [2024-07-24 23:47:19.189522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.600 [2024-07-24 23:47:19.189647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:48.600 [2024-07-24 23:47:19.189731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:48.600 [2024-07-24 23:47:19.189784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:48.600 [2024-07-24 23:47:19.189787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 [2024-07-24 23:47:19.948408] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 Malloc0 00:09:49.531 
23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:49.531 23:47:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:49.531 [2024-07-24 23:47:20.001984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:49.531 { 00:09:49.531 "params": { 00:09:49.531 "name": "Nvme$subsystem", 00:09:49.531 "trtype": "$TEST_TRANSPORT", 00:09:49.531 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:49.531 "adrfam": "ipv4", 00:09:49.531 "trsvcid": "$NVMF_PORT", 00:09:49.531 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:49.531 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:49.531 "hdgst": ${hdgst:-false}, 00:09:49.531 "ddgst": ${ddgst:-false} 00:09:49.531 }, 00:09:49.531 "method": "bdev_nvme_attach_controller" 00:09:49.531 } 00:09:49.531 EOF 00:09:49.531 )") 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:49.531 23:47:20 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:49.531 "params": { 00:09:49.531 "name": "Nvme1", 00:09:49.531 "trtype": "tcp", 00:09:49.531 "traddr": "10.0.0.2", 00:09:49.531 "adrfam": "ipv4", 00:09:49.531 "trsvcid": "4420", 00:09:49.531 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:49.531 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:49.531 "hdgst": false, 00:09:49.531 "ddgst": false 00:09:49.531 }, 00:09:49.531 "method": "bdev_nvme_attach_controller" 00:09:49.531 }' 00:09:49.531 [2024-07-24 23:47:20.054146] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:09:49.531 [2024-07-24 23:47:20.054264] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3314466 ] 00:09:49.531 EAL: No free 2048 kB hugepages reported on node 1 00:09:49.531 [2024-07-24 23:47:20.122629] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:49.788 [2024-07-24 23:47:20.239084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.788 [2024-07-24 23:47:20.239133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:49.788 [2024-07-24 23:47:20.239137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.044 I/O targets: 00:09:50.044 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:50.044 00:09:50.044 00:09:50.044 CUnit - A unit testing framework for C - Version 2.1-3 00:09:50.044 http://cunit.sourceforge.net/ 00:09:50.045 00:09:50.045 00:09:50.045 Suite: bdevio tests on: Nvme1n1 00:09:50.045 Test: blockdev write read block ...passed 00:09:50.302 Test: blockdev write zeroes read block ...passed 00:09:50.302 Test: blockdev write zeroes read no split ...passed 00:09:50.302 Test: blockdev write zeroes read split ...passed 00:09:50.302 Test: blockdev write zeroes read split partial ...passed 00:09:50.302 Test: blockdev reset ...[2024-07-24 23:47:20.744773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:50.302 [2024-07-24 23:47:20.744877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x588580 (9): Bad file descriptor 00:09:50.302 [2024-07-24 23:47:20.804329] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:09:50.302 passed 00:09:50.302 Test: blockdev write read 8 blocks ...passed 00:09:50.302 Test: blockdev write read size > 128k ...passed 00:09:50.302 Test: blockdev write read invalid size ...passed 00:09:50.302 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:50.302 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:50.302 Test: blockdev write read max offset ...passed 00:09:50.559 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:50.559 Test: blockdev writev readv 8 blocks ...passed 00:09:50.559 Test: blockdev writev readv 30 x 1block ...passed 00:09:50.559 Test: blockdev writev readv block ...passed 00:09:50.559 Test: blockdev writev readv size > 128k ...passed 00:09:50.559 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:50.559 Test: blockdev comparev and writev ...[2024-07-24 23:47:21.062143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.062178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.062201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.062218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.062569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.062593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.062615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.062630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.062975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.062999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.063021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.063037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:50.559 [2024-07-24 23:47:21.063393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.559 [2024-07-24 23:47:21.063417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:50.560 [2024-07-24 23:47:21.063438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:50.560 [2024-07-24 23:47:21.063454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:09:50.560 passed
00:09:50.560 Test: blockdev nvme passthru rw ...passed
00:09:50.560 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:47:21.147531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:50.560 [2024-07-24 23:47:21.147557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:09:50.560 [2024-07-24 23:47:21.147744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:50.560 [2024-07-24 23:47:21.147768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:09:50.560 [2024-07-24 23:47:21.147937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:50.560 [2024-07-24 23:47:21.147960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:09:50.560 [2024-07-24 23:47:21.148135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:09:50.560 [2024-07-24 23:47:21.148158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:09:50.560 passed
00:09:50.560 Test: blockdev nvme admin passthru ...passed
00:09:50.817 Test: blockdev copy ...passed
00:09:50.817
00:09:50.817 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:09:50.817               suites      1      1    n/a      0        0
00:09:50.817                tests     23     23     23      0        0
00:09:50.817              asserts    152    152    152      0      n/a
00:09:50.817
00:09:50.817 Elapsed time =    1.248 seconds
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:51.074 rmmod nvme_tcp
00:09:51.074 rmmod nvme_fabrics
00:09:51.074 rmmod nvme_keyring
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0
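
The unload step traced above ('for i in {1..20}' followed by 'modprobe -v -r nvme-tcp') exists because the kernel refuses to remove nvme-tcp while any controller is still connected, so nvmftestfini retries until the removal succeeds. A sketch of that loop's shape, reconstructed from the xtrace rather than quoted from nvmf/common.sh:

    set +e                        # a failed removal must not abort the script
    for i in {1..20}; do
        # -v echoes the underlying rmmod calls (the rmmod nvme_* lines above);
        # -r also removes now-unused dependents, hence three modules from one call
        modprobe -v -r nvme-tcp && break
        sleep 1                   # assumed back-off; the trace shows only the modprobe calls
    done
    modprobe -v -r nvme-fabrics
    set -e
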
00:09:51.074 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 3314308 ']' 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 3314308 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 3314308 ']' 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 3314308 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3314308 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3314308' 00:09:51.075 killing process with pid 3314308 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 3314308 00:09:51.075 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 3314308 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:51.333 23:47:21 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:53.889 00:09:53.889 real 0m7.060s 00:09:53.889 user 0m13.987s 00:09:53.889 sys 0m2.068s 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:53.889 ************************************ 00:09:53.889 END TEST nvmf_bdevio 00:09:53.889 ************************************ 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:53.889 00:09:53.889 real 3m55.862s 00:09:53.889 user 10m13.574s 00:09:53.889 sys 1m6.967s 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:53.889 ************************************ 00:09:53.889 END TEST nvmf_target_core 00:09:53.889 ************************************ 00:09:53.889 23:47:23 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:53.889 23:47:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.889 23:47:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.889 23:47:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:53.889 ************************************ 00:09:53.889 START TEST nvmf_target_extra 00:09:53.889 ************************************ 00:09:53.889 23:47:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:53.889 * Looking for test storage... 00:09:53.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.889 23:47:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
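
run_test is the harness wrapper that brackets every test script in this log with the START TEST/END TEST banners and a real/user/sys timing block. A rough sketch of its shape, inferred from those banners and from the '[' 3 -le 1 ']' argument-count guard visible in the trace that follows (the real helper in autotest_common.sh does more bookkeeping):

    run_test() {
        # the trace's '[' 3 -le 1 ']': a test name alone is not enough, a command must follow
        if [ "$#" -le 1 ]; then
            echo "usage: run_test <test_name> <command> [args...]" >&2
            return 1
        fi
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                 # produces the real/user/sys lines seen in the log
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }
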
00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:53.890 ************************************ 00:09:53.890 START TEST nvmf_example 00:09:53.890 ************************************ 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:53.890 * Looking for test storage... 00:09:53.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:53.890 23:47:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:53.890 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:53.891 23:47:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:55.789 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:55.789 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:55.789 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:55.789 23:47:26 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:55.789 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:55.789 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:55.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:55.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms
00:09:55.789
00:09:55.790 --- 10.0.0.2 ping statistics ---
00:09:55.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:55.790 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:55.790 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:55.790 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:09:55.790
00:09:55.790 --- 10.0.0.1 ping statistics ---
00:09:55.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:55.790 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3316708
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3316708
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 3316708 ']'
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100
00:09:55.790 23:47:26
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:55.790 23:47:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:55.790 EAL: No free 2048 kB hugepages reported on node 1 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.160 23:47:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:09:57.160 23:47:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:09:57.160 EAL: No free 2048 kB hugepages reported on node 1
00:10:07.118 Initializing NVMe Controllers
00:10:07.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:07.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:07.118 Initialization complete. Launching workers.
00:10:07.118 ========================================================
00:10:07.118                                           Latency(us)
00:10:07.118 Device Information                                     :    IOPS     MiB/s   Average       min       max
00:10:07.118 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15003.63     58.61   4265.16    715.52  15425.08
00:10:07.118 ========================================================
00:10:07.118 Total                                                  : 15003.63     58.61   4265.16    715.52  15425.08
00:10:07.118
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:07.118 rmmod nvme_tcp
00:10:07.118 rmmod nvme_fabrics
00:10:07.118 rmmod nvme_keyring
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 3316708 ']'
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 3316708
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 3316708 ']'
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 3316708
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname
00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:10:07.118 23:47:37
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3316708 00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3316708' 00:10:07.118 killing process with pid 3316708 00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 3316708 00:10:07.118 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 3316708 00:10:07.375 nvmf threads initialize successfully 00:10:07.375 bdev subsystem init successfully 00:10:07.375 created a nvmf target service 00:10:07.375 create targets's poll groups done 00:10:07.375 all subsystems of target started 00:10:07.375 nvmf target is running 00:10:07.375 all subsystems of target stopped 00:10:07.375 destroy targets's poll groups done 00:10:07.375 destroyed the nvmf target service 00:10:07.375 bdev subsystem finish successfully 00:10:07.375 nvmf threads destroy successfully 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:07.375 23:47:37 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.907 23:47:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.907 00:10:09.907 real 0m15.962s 00:10:09.907 user 0m45.251s 00:10:09.907 sys 0m3.249s 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:09.907 ************************************ 00:10:09.907 END TEST nvmf_example 00:10:09.907 ************************************ 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.907 23:47:40 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:09.907 ************************************ 00:10:09.907 START TEST nvmf_filesystem 00:10:09.907 ************************************ 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:09.907 * Looking for test storage... 00:10:09.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:09.907 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:09.907 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:09.908 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:10:09.908 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:09.908 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:09.908 23:47:40 
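[editor's note] The applications.sh trace above first resolves the repository root from the script's own location and then defines each target application as a bash array so callers can splice in options at launch time. A minimal sketch of that pattern follows; the path layout and suffix-stripping are assumptions for illustration, not the exact SPDK source:

    #!/usr/bin/env bash
    # Resolve this script's directory, then derive the repo root from it.
    _this=$(readlink -f "$(dirname "${BASH_SOURCE[0]}")")   # e.g. <root>/test/common
    _root=${_this%/test/common}                             # assumes the script lives under <root>/test/common
    _app_dir=$_root/build/bin

    # Arrays (not plain strings) let callers append flags safely, e.g.
    #   "${NVMF_APP[@]}" -i "$NVMF_APP_SHM_ID" -e 0xFFFF
    NVMF_APP=("$_app_dir/nvmf_tgt")
    SPDK_APP=("$_app_dir/spdk_tgt")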
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:09.908 #define SPDK_CONFIG_H 00:10:09.908 #define SPDK_CONFIG_APPS 1 00:10:09.908 #define SPDK_CONFIG_ARCH native 00:10:09.908 #undef SPDK_CONFIG_ASAN 00:10:09.908 #undef SPDK_CONFIG_AVAHI 00:10:09.908 #undef SPDK_CONFIG_CET 00:10:09.908 #define SPDK_CONFIG_COVERAGE 1 00:10:09.908 #define SPDK_CONFIG_CROSS_PREFIX 00:10:09.908 #undef SPDK_CONFIG_CRYPTO 00:10:09.908 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:09.908 #undef SPDK_CONFIG_CUSTOMOCF 00:10:09.908 #undef SPDK_CONFIG_DAOS 00:10:09.908 #define SPDK_CONFIG_DAOS_DIR 00:10:09.908 #define SPDK_CONFIG_DEBUG 1 00:10:09.908 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:09.908 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:09.908 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:09.908 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:09.908 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:09.908 #undef SPDK_CONFIG_DPDK_UADK 00:10:09.908 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:09.908 #define SPDK_CONFIG_EXAMPLES 1 00:10:09.908 #undef SPDK_CONFIG_FC 00:10:09.908 #define SPDK_CONFIG_FC_PATH 00:10:09.908 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:09.908 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:09.908 #undef SPDK_CONFIG_FUSE 00:10:09.908 #undef SPDK_CONFIG_FUZZER 00:10:09.908 #define SPDK_CONFIG_FUZZER_LIB 00:10:09.908 #undef SPDK_CONFIG_GOLANG 00:10:09.908 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:09.908 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:09.908 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:09.908 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:09.908 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:09.908 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:09.908 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:09.908 #define SPDK_CONFIG_IDXD 1 00:10:09.908 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:09.908 #undef SPDK_CONFIG_IPSEC_MB 00:10:09.908 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:09.908 #define SPDK_CONFIG_ISAL 1 00:10:09.909 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:09.909 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:09.909 #define SPDK_CONFIG_LIBDIR 00:10:09.909 #undef SPDK_CONFIG_LTO 00:10:09.909 #define SPDK_CONFIG_MAX_LCORES 128 00:10:09.909 #define SPDK_CONFIG_NVME_CUSE 1 00:10:09.909 #undef SPDK_CONFIG_OCF 00:10:09.909 #define SPDK_CONFIG_OCF_PATH 00:10:09.909 #define SPDK_CONFIG_OPENSSL_PATH 00:10:09.909 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:09.909 #define SPDK_CONFIG_PGO_DIR 00:10:09.909 #undef SPDK_CONFIG_PGO_USE 00:10:09.909 #define SPDK_CONFIG_PREFIX /usr/local 00:10:09.909 #undef SPDK_CONFIG_RAID5F 00:10:09.909 #undef SPDK_CONFIG_RBD 00:10:09.909 #define SPDK_CONFIG_RDMA 1 00:10:09.909 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:09.909 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:09.909 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:09.909 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:09.909 #define SPDK_CONFIG_SHARED 1 00:10:09.909 #undef SPDK_CONFIG_SMA 00:10:09.909 #define SPDK_CONFIG_TESTS 1 00:10:09.909 #undef SPDK_CONFIG_TSAN 00:10:09.909 #define SPDK_CONFIG_UBLK 1 00:10:09.909 #define SPDK_CONFIG_UBSAN 1 00:10:09.909 #undef SPDK_CONFIG_UNIT_TESTS 00:10:09.909 #undef SPDK_CONFIG_URING 00:10:09.909 #define SPDK_CONFIG_URING_PATH 00:10:09.909 #undef SPDK_CONFIG_URING_ZNS 00:10:09.909 #undef SPDK_CONFIG_USDT 00:10:09.909 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:09.909 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:09.909 #define SPDK_CONFIG_VFIO_USER 1 00:10:09.909 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:10:09.909 #define SPDK_CONFIG_VHOST 1 00:10:09.909 #define SPDK_CONFIG_VIRTIO 1 00:10:09.909 #undef SPDK_CONFIG_VTUNE 00:10:09.909 #define SPDK_CONFIG_VTUNE_DIR 00:10:09.909 #define SPDK_CONFIG_WERROR 1 00:10:09.909 #define SPDK_CONFIG_WPDK_DIR 00:10:09.909 #undef SPDK_CONFIG_XNVME 00:10:09.909 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:09.909 23:47:40 
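[editor's note] The long PATH echoed above repeats the same /opt toolchain directories many times because the export script is sourced once per script in the call chain and prepends unconditionally on every pass. A guarded prepend like the sketch below would keep the variable duplicate-free; path_prepend is a hypothetical helper, not the SPDK code:

    # Prepend a directory to PATH only if it is not already a component.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;            # already present; keep PATH unchanged
            *) PATH=$1:$PATH ;;
        esac
    }

    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin   # second call is a no-op
    export PATH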
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
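[editor's note] The pm/common trace just above selects the resource monitors for this node: collect-cpu-load and collect-vmstat always run, and the hardware monitors are added only on bare-metal Linux (not QEMU, not a container). Roughly, under the same checks the log shows; note the trace masks where the vendor string is read from, so the sys_vendor path below is an assumption:

    # Monitors that must run under sudo are flagged in an associative map.
    declare -A MONITOR_RESOURCES_SUDO=(
        [collect-bmc-pm]=1 [collect-cpu-load]=0
        [collect-cpu-temp]=0 [collect-vmstat]=0
    )
    MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)

    # Bare metal only: skip hardware monitors inside QEMU or a container.
    # (vendor source assumed; the trace does not show it)
    if [[ $(uname -s) == Linux && $(</sys/class/dmi/id/sys_vendor) != QEMU \
          && ! -e /.dockerenv ]]; then
        MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
    fi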
00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:09.909 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:09.910 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:09.910 23:47:40 
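[editor's note] Each ': 0' (or ': 1', ': tcp') entry paired with an export in this run is consistent with bash's default-assignment idiom: the no-op ':' forces evaluation of a ${VAR:=default} expansion, which assigns only when the variable is unset or empty, and xtrace then prints the expanded value. A sketch of the pattern using flags from this trace; the exact autotest_common.sh wording may differ:

    # Assigns 0 only if RUN_NIGHTLY is unset/empty; xtrace shows this as ": 0".
    : "${RUN_NIGHTLY:=0}"
    export RUN_NIGHTLY

    : "${SPDK_TEST_NVMF:=1}"              # already 1 in this run, so kept as-is
    export SPDK_TEST_NVMF

    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"  # traced above as ": tcp"
    export SPDK_TEST_NVMF_TRANSPORT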
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:09.910 
23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:10:09.910 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:10:09.911 23:47:40 
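[editor's note] A few entries back, the trace builds the leak-sanitizer suppression file and wires the sanitizer option strings into the environment. The mechanism reduces to the following sketch; the path and option values are taken from the trace, but the surrounding logic is simplified:

    # One "leak:<pattern>" line per accepted leak; libfuse3 is suppressed here.
    asan_suppression_file=/var/tmp/asan_suppression_file
    rm -rf "$asan_suppression_file"
    echo "leak:libfuse3.so" > "$asan_suppression_file"
    export LSAN_OPTIONS=suppressions=$asan_suppression_file

    # ASan/UBSan runtime behaviour is configured through the same env-var style.
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134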
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 3318407 ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 3318407 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.BV2zke 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.BV2zke/tests/target /tmp/spdk.BV2zke 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:10:09.911 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=55559581696 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994729472 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6435147776 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30987444224 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9920512 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12376539136 00:10:09.911 23:47:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=22409216 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996639744 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=724992 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:10:09.911 * Looking for test storage... 
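[editor's note] set_test_storage, traced above, snapshots df -T into parallel associative arrays keyed by mount point, then checks whether the candidate directory's filesystem offers the requested bytes. Condensed to its core in the sketch below; set_test_storage_sketch is an illustrative name, and the real helper also special-cases tmpfs/ramfs mounts and falls back through several candidate directories:

    set_test_storage_sketch() {
        local requested_size=$1 target_dir=$2
        local -A fss avails
        local source fs size used avail _ mount

        # Record fs type and free space per mount; df -T reports 1K blocks.
        while read -r source fs size used avail _ mount; do
            fss["$mount"]=$fs
            avails["$mount"]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)

        # Find the mount backing the target directory, as in the awk line above.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        (( avails[$mount] >= requested_size )) &&
            printf '* Found test storage at %s\n' "$target_dir"
    }

    set_test_storage_sketch 2214592512 /tmp   # 2 GiB plus overhead, as in the trace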
00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=55559581696 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:10:09.911 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8649740288 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.912 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:10:09.912 23:47:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:10:11.812 
23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:11.812 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:11.812 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:11.812 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:11.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:11.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:11.813 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.813 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:10:11.813 00:10:11.813 --- 10.0.0.2 ping statistics --- 00:10:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.813 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.813 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.813 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:10:11.813 00:10:11.813 --- 10.0.0.1 ping statistics --- 00:10:11.813 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.813 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:11.813 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.071 ************************************ 00:10:12.071 START TEST nvmf_filesystem_no_in_capsule 00:10:12.071 ************************************ 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3320030 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3320030 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3320030 ']' 00:10:12.071 
23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:12.071 23:47:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:12.071 [2024-07-24 23:47:42.488483] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:10:12.071 [2024-07-24 23:47:42.488570] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.071 EAL: No free 2048 kB hugepages reported on node 1 00:10:12.071 [2024-07-24 23:47:42.557582] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.071 [2024-07-24 23:47:42.679598] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.071 [2024-07-24 23:47:42.679660] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.071 [2024-07-24 23:47:42.679677] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.071 [2024-07-24 23:47:42.679690] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.071 [2024-07-24 23:47:42.679702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
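Note: before the target starts, nvmf_tcp_init wires one E810 port into a private network namespace so initiator and target traffic cross a real link. The sequence traced above, condensed (commands, interface names, and addresses exactly as recorded; the nvmf_tgt path is shortened):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # verify both directions
    # then launch the target inside the namespace and wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &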
00:10:12.071 [2024-07-24 23:47:42.679771] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.071 [2024-07-24 23:47:42.679841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.071 [2024-07-24 23:47:42.679939] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.071 [2024-07-24 23:47:42.679942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.003 [2024-07-24 23:47:43.474782] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.003 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.261 Malloc1 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.261 23:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.261 [2024-07-24 23:47:43.641204] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local nb 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[ 00:10:13.261 { 00:10:13.261 "name": "Malloc1", 00:10:13.261 "aliases": [ 00:10:13.261 "8b4c32ff-b467-40df-ad09-f052a51226f4" 00:10:13.261 ], 00:10:13.261 "product_name": "Malloc disk", 00:10:13.261 "block_size": 512, 00:10:13.261 "num_blocks": 1048576, 00:10:13.261 "uuid": "8b4c32ff-b467-40df-ad09-f052a51226f4", 00:10:13.261 "assigned_rate_limits": { 00:10:13.261 "rw_ios_per_sec": 0, 00:10:13.261 "rw_mbytes_per_sec": 0, 00:10:13.261 "r_mbytes_per_sec": 0, 00:10:13.261 "w_mbytes_per_sec": 0 00:10:13.261 }, 00:10:13.261 "claimed": true, 00:10:13.261 "claim_type": "exclusive_write", 00:10:13.261 "zoned": false, 00:10:13.261 "supported_io_types": { 00:10:13.261 "read": 
true, 00:10:13.261 "write": true, 00:10:13.261 "unmap": true, 00:10:13.261 "flush": true, 00:10:13.261 "reset": true, 00:10:13.261 "nvme_admin": false, 00:10:13.261 "nvme_io": false, 00:10:13.261 "nvme_io_md": false, 00:10:13.261 "write_zeroes": true, 00:10:13.261 "zcopy": true, 00:10:13.261 "get_zone_info": false, 00:10:13.261 "zone_management": false, 00:10:13.261 "zone_append": false, 00:10:13.261 "compare": false, 00:10:13.261 "compare_and_write": false, 00:10:13.261 "abort": true, 00:10:13.261 "seek_hole": false, 00:10:13.261 "seek_data": false, 00:10:13.261 "copy": true, 00:10:13.261 "nvme_iov_md": false 00:10:13.261 }, 00:10:13.261 "memory_domains": [ 00:10:13.261 { 00:10:13.261 "dma_device_id": "system", 00:10:13.261 "dma_device_type": 1 00:10:13.261 }, 00:10:13.261 { 00:10:13.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:13.261 "dma_device_type": 2 00:10:13.261 } 00:10:13.261 ], 00:10:13.261 "driver_specific": {} 00:10:13.261 } 00:10:13.261 ]' 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size' 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # bs=512 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks' 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # echo 512 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:13.261 23:47:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:13.825 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:13.825 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:10:13.825 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:13.825 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:13.825 23:47:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:10:15.718 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:15.718 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:15.718 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # grep -c 
SPDKISFASTANDAWESOME 00:10:15.975 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:15.976 23:47:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:16.904 23:47:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:17.834 ************************************ 00:10:17.834 START TEST filesystem_ext4 00:10:17.834 ************************************ 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
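Note: on the host side the test attaches to the exported namespace and carves one GPT partition that every filesystem subtest reuses. Condensed from the trace above (the --hostnqn/--hostid flags are trimmed, and the serial-to-device lookup is simplified relative to the grep -oP the script actually uses):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme_name=$(lsblk -l -o NAME,SERIAL | awk '/SPDKISFASTANDAWESOME/{print $1; exit}')
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1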
00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:17.834 23:47:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:17.834 mke2fs 1.46.5 (30-Dec-2021) 00:10:17.834 Discarding device blocks: 0/522240 done 00:10:17.834 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:17.835 Filesystem UUID: 79f8315e-ce21-4f64-9d9a-5288431633a4 00:10:17.835 Superblock backups stored on blocks: 00:10:17.835 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:17.835 00:10:17.835 Allocating group tables: 0/64 done 00:10:17.835 Writing inode tables: 0/64 done 00:10:18.096 Creating journal (8192 blocks): done 00:10:18.962 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:10:18.962 00:10:18.962 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:10:18.962 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:18.962 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.219 
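Note: each filesystem_* subtest applies the same create/verify cycle; for ext4 above that is (commands as recorded, with the partition name fixed to the device discovered earlier):

    mkfs.ext4 -F /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa     # prove the filesystem accepts writes
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"        # the target app must still be alive afterwards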
23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3320030 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.219 00:10:19.219 real 0m1.471s 00:10:19.219 user 0m0.022s 00:10:19.219 sys 0m0.050s 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:19.219 ************************************ 00:10:19.219 END TEST filesystem_ext4 00:10:19.219 ************************************ 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.219 ************************************ 00:10:19.219 START TEST filesystem_btrfs 00:10:19.219 ************************************ 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:19.219 23:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:19.219 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:19.477 btrfs-progs v6.6.2 00:10:19.477 See https://btrfs.readthedocs.io for more information. 00:10:19.477 00:10:19.477 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:19.477 NOTE: several default settings have changed in version 5.15, please make sure 00:10:19.477 this does not affect your deployments: 00:10:19.477 - DUP for metadata (-m dup) 00:10:19.477 - enabled no-holes (-O no-holes) 00:10:19.477 - enabled free-space-tree (-R free-space-tree) 00:10:19.477 00:10:19.477 Label: (null) 00:10:19.477 UUID: a6e85e9b-ec66-444f-ab8f-b63ddd91eb61 00:10:19.477 Node size: 16384 00:10:19.477 Sector size: 4096 00:10:19.477 Filesystem size: 510.00MiB 00:10:19.477 Block group profiles: 00:10:19.477 Data: single 8.00MiB 00:10:19.477 Metadata: DUP 32.00MiB 00:10:19.477 System: DUP 8.00MiB 00:10:19.477 SSD detected: yes 00:10:19.477 Zoned device: no 00:10:19.477 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:19.477 Runtime features: free-space-tree 00:10:19.477 Checksum: crc32c 00:10:19.477 Number of devices: 1 00:10:19.477 Devices: 00:10:19.477 ID SIZE PATH 00:10:19.477 1 510.00MiB /dev/nvme0n1p1 00:10:19.477 00:10:19.477 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:10:19.477 23:47:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3320030 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:19.734 00:10:19.734 real 0m0.534s 00:10:19.734 user 0m0.025s 00:10:19.734 sys 0m0.106s 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:19.734 ************************************ 00:10:19.734 END TEST filesystem_btrfs 00:10:19.734 ************************************ 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:19.734 ************************************ 00:10:19.734 START TEST filesystem_xfs 00:10:19.734 ************************************ 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:19.734 23:47:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:19.992 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:19.992 = sectsz=512 attr=2, projid32bit=1 00:10:19.992 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:19.992 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:10:19.992 data = bsize=4096 blocks=130560, imaxpct=25 00:10:19.992 = sunit=0 swidth=0 blks 00:10:19.992 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:19.992 log =internal log bsize=4096 blocks=16384, version=2 00:10:19.992 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:19.992 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:20.924 Discarding blocks...Done. 00:10:20.924 23:47:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:20.924 23:47:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3320030 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:23.445 00:10:23.445 real 0m3.320s 00:10:23.445 user 0m0.012s 00:10:23.445 sys 0m0.058s 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:23.445 ************************************ 00:10:23.445 END TEST filesystem_xfs 00:10:23.445 ************************************ 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.445 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:23.445 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3320030 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3320030 ']' 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3320030 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3320030 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3320030' 00:10:23.446 killing process with pid 3320030 00:10:23.446 23:47:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 3320030 00:10:23.446 23:47:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 3320030 00:10:23.703 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:23.703 00:10:23.703 real 0m11.855s 00:10:23.703 user 0m45.540s 00:10:23.703 sys 0m1.745s 00:10:23.703 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:23.703 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.703 ************************************ 00:10:23.703 END TEST nvmf_filesystem_no_in_capsule 00:10:23.703 ************************************ 00:10:23.961 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:23.961 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:23.962 ************************************ 00:10:23.962 START TEST nvmf_filesystem_in_capsule 00:10:23.962 ************************************ 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=3321596 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 3321596 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 3321596 ']' 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:23.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.962 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:23.962 [2024-07-24 23:47:54.394876] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:10:23.962 [2024-07-24 23:47:54.394972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.962 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.962 [2024-07-24 23:47:54.463850] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.219 [2024-07-24 23:47:54.575896] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.219 [2024-07-24 23:47:54.575946] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.219 [2024-07-24 23:47:54.575961] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.219 [2024-07-24 23:47:54.575973] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.219 [2024-07-24 23:47:54.575986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:24.219 [2024-07-24 23:47:54.576084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.219 [2024-07-24 23:47:54.576138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.219 [2024-07-24 23:47:54.576142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.219 [2024-07-24 23:47:54.576110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 
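The waitforlisten helper traced just above blocks until the freshly started nvmf_tgt is ready to serve RPCs on /var/tmp/spdk.sock. A sketch of that loop; the socket path, `max_retries=100`, and the waiting message match the xtrace, but the readiness probe itself is an assumption (the real helper may ping the RPC server rather than just stat the socket):

    # Hypothetical reconstruction of waitforlisten from common/autotest_common.sh.
    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket present -> target is listening
            sleep 0.1
        done
        return 1
    }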
00:10:24.219 [2024-07-24 23:47:54.724548] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.219 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.220 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:24.220 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.220 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.477 Malloc1 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:24.477 [2024-07-24 23:47:54.913199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_name=Malloc1 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bdev_info 00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bs 00:10:24.477 23:47:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local nb
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # rpc_cmd bdev_get_bdevs -b Malloc1
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bdev_info='[
00:10:24.477 {
00:10:24.477 "name": "Malloc1",
00:10:24.477 "aliases": [
00:10:24.477 "6cb50e4b-af47-4be9-8f6f-00baf9f7fd58"
00:10:24.477 ],
00:10:24.477 "product_name": "Malloc disk",
00:10:24.477 "block_size": 512,
00:10:24.477 "num_blocks": 1048576,
00:10:24.477 "uuid": "6cb50e4b-af47-4be9-8f6f-00baf9f7fd58",
00:10:24.477 "assigned_rate_limits": {
00:10:24.477 "rw_ios_per_sec": 0,
00:10:24.477 "rw_mbytes_per_sec": 0,
00:10:24.477 "r_mbytes_per_sec": 0,
00:10:24.477 "w_mbytes_per_sec": 0
00:10:24.477 },
00:10:24.477 "claimed": true,
00:10:24.477 "claim_type": "exclusive_write",
00:10:24.477 "zoned": false,
00:10:24.477 "supported_io_types": {
00:10:24.477 "read": true,
00:10:24.477 "write": true,
00:10:24.477 "unmap": true,
00:10:24.477 "flush": true,
00:10:24.477 "reset": true,
00:10:24.477 "nvme_admin": false,
00:10:24.477 "nvme_io": false,
00:10:24.477 "nvme_io_md": false,
00:10:24.477 "write_zeroes": true,
00:10:24.477 "zcopy": true,
00:10:24.477 "get_zone_info": false,
00:10:24.477 "zone_management": false,
00:10:24.477 "zone_append": false,
00:10:24.477 "compare": false,
00:10:24.477 "compare_and_write": false,
00:10:24.477 "abort": true,
00:10:24.477 "seek_hole": false,
00:10:24.477 "seek_data": false,
00:10:24.477 "copy": true,
00:10:24.477 "nvme_iov_md": false
00:10:24.477 },
00:10:24.477 "memory_domains": [
00:10:24.477 {
00:10:24.477 "dma_device_id": "system",
00:10:24.477 "dma_device_type": 1
00:10:24.477 },
00:10:24.477 {
00:10:24.477 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:24.477 "dma_device_type": 2
00:10:24.477 }
00:10:24.477 ],
00:10:24.477 "driver_specific": {}
00:10:24.477 }
00:10:24.477 ]'
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .block_size'
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # bs=512
00:10:24.477 23:47:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # jq '.[] .num_blocks'
00:10:24.477 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # nb=1048576
00:10:24.477 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bdev_size=512
00:10:24.477 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # echo 512
00:10:24.477 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912
00:10:24.477 23:47:55
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:25.408 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:25.408 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local i=0 00:10:25.408 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:10:25.408 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:10:25.408 23:47:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # sleep 2 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:10:27.302 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # return 0 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:27.303 23:47:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:28.234 23:47:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:29.604 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:29.604 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:29.604 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:29.604 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:29.604 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:29.604 ************************************ 00:10:29.604 START TEST filesystem_in_capsule_ext4 00:10:29.604 ************************************ 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:10:29.605 23:47:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:29.605 mke2fs 1.46.5 (30-Dec-2021) 00:10:29.605 Discarding device blocks: 0/522240 done 00:10:29.605 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:29.605 Filesystem UUID: 51b0bfbe-1a66-4be2-aa7d-bd7eb120b059 00:10:29.605 Superblock backups stored on blocks: 00:10:29.605 8193, 24577, 40961, 57345, 73729, 204801, 
221185, 401409
00:10:29.605
00:10:29.605 Allocating group tables: 0/64 done
00:10:29.605 Writing inode tables: 0/64 done
00:10:29.605 Creating journal (8192 blocks): done
00:10:30.683 Writing superblocks and filesystem accounting information: 0/6450/64 done
00:10:30.683
00:10:30.683 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0
00:10:30.683 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:31.246 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 3321596
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:31.247
00:10:31.247 real 0m1.835s
00:10:31.247 user 0m0.017s
00:10:31.247 sys 0m0.053s
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x
00:10:31.247 ************************************
00:10:31.247 END TEST filesystem_in_capsule_ext4
00:10:31.247 ************************************
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:31.247 23:48:01
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:31.247 ************************************ 00:10:31.247 START TEST filesystem_in_capsule_btrfs 00:10:31.247 ************************************ 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:10:31.247 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:31.503 btrfs-progs v6.6.2 00:10:31.503 See https://btrfs.readthedocs.io for more information. 00:10:31.503 00:10:31.503 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
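The make_filesystem helper whose xtrace appears at @924-@935 above picks the right force flag for each filesystem and shells out to the matching mkfs tool. A sketch reconstructed from the traced lines; the ext4-vs-rest flag choice is taken verbatim from the trace, while the retry handling around mkfs is an assumption (the trace only shows `local i=0` and the eventual `return 0` at @943):

    # Sketch of make_filesystem from common/autotest_common.sh (reconstructed).
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F            # mke2fs spells "force" -F ...
        else
            force=-f            # ... while mkfs.xfs and mkfs.btrfs use -f
        fi
        "mkfs.$fstype" $force "$dev_name" && return 0
        return 1
    }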
00:10:31.503 NOTE: several default settings have changed in version 5.15, please make sure
00:10:31.503 this does not affect your deployments:
00:10:31.503 - DUP for metadata (-m dup)
00:10:31.503 - enabled no-holes (-O no-holes)
00:10:31.503 - enabled free-space-tree (-R free-space-tree)
00:10:31.503
00:10:31.503 Label: (null)
00:10:31.503 UUID: 510fc7cb-1634-445e-b865-60908369eb09
00:10:31.503 Node size: 16384
00:10:31.503 Sector size: 4096
00:10:31.503 Filesystem size: 510.00MiB
00:10:31.503 Block group profiles:
00:10:31.503 Data: single 8.00MiB
00:10:31.503 Metadata: DUP 32.00MiB
00:10:31.503 System: DUP 8.00MiB
00:10:31.503 SSD detected: yes
00:10:31.503 Zoned device: no
00:10:31.503 Incompat features: extref, skinny-metadata, no-holes, free-space-tree
00:10:31.503 Runtime features: free-space-tree
00:10:31.503 Checksum: crc32c
00:10:31.503 Number of devices: 1
00:10:31.503 Devices:
00:10:31.503 ID SIZE PATH
00:10:31.503 1 510.00MiB /dev/nvme0n1p1
00:10:31.503
00:10:31.503 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0
00:10:31.503 23:48:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device
00:10:31.760 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3321596
00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME
00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1
00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME
00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1
00:10:31.761
00:10:31.761 real 0m0.634s
00:10:31.761 user 0m0.016s
00:10:31.761 sys 0m0.111s
00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable
00:10:31.761 23:48:02
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:31.761 ************************************ 00:10:31.761 END TEST filesystem_in_capsule_btrfs 00:10:31.761 ************************************ 00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.761 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:32.018 ************************************ 00:10:32.018 START TEST filesystem_in_capsule_xfs 00:10:32.018 ************************************ 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:10:32.018 23:48:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:32.018 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:32.018 = sectsz=512 attr=2, projid32bit=1 00:10:32.018 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:32.018 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:32.018 data = bsize=4096 blocks=130560, imaxpct=25 00:10:32.018 = sunit=0 swidth=0 blks 00:10:32.018 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:32.018 log =internal log bsize=4096 blocks=16384, version=2 00:10:32.018 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:32.018 realtime =none extsz=4096 blocks=0, 
rtextents=0 00:10:32.948 Discarding blocks...Done. 00:10:32.948 23:48:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:10:32.948 23:48:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3321596 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:35.471 00:10:35.471 real 0m3.417s 00:10:35.471 user 0m0.016s 00:10:35.471 sys 0m0.065s 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:35.471 ************************************ 00:10:35.471 END TEST filesystem_in_capsule_xfs 00:10:35.471 ************************************ 00:10:35.471 23:48:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.760 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.760 23:48:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # local i=0 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # return 0 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3321596 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 3321596 ']' 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 3321596 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3321596 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3321596' 00:10:35.760 killing process with pid 3321596 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 3321596 00:10:35.760 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 3321596 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:36.325 00:10:36.325 real 0m12.364s 00:10:36.325 user 0m47.223s 
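The killprocess teardown traced at @948-@972 above guards against killing the wrong thing before signalling the target. A sketch reconstructed from that xtrace; every check below appears in the trace, and only the sudo-branch body is an assumption:

    # Sketch of killprocess from common/autotest_common.sh (reconstructed).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1                        # @948: no pid recorded
        kill -0 "$pid" 2>/dev/null || return 0           # @952: process already gone
        local process_name
        if [ "$(uname)" = Linux ]; then                  # @953
            process_name=$(ps --no-headers -o comm= "$pid")   # @954: reactor_0 here
        fi
        if [ "$process_name" = sudo ]; then              # @958: never signal sudo itself
            pid=$(ps --ppid "$pid" --no-headers -o pid=) # assumed: retarget to the child
        fi
        echo "killing process with pid $pid"             # @966
        kill "$pid"                                      # @967
        wait "$pid" 2>/dev/null                          # @972: reap and surface exit code
    }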
00:10:36.325 sys 0m1.831s 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:36.325 ************************************ 00:10:36.325 END TEST nvmf_filesystem_in_capsule 00:10:36.325 ************************************ 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:36.325 rmmod nvme_tcp 00:10:36.325 rmmod nvme_fabrics 00:10:36.325 rmmod nvme_keyring 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.325 23:48:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.225 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:38.225 00:10:38.225 real 0m28.755s 00:10:38.225 user 1m33.724s 00:10:38.225 sys 0m5.144s 00:10:38.225 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:38.225 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:38.225 ************************************ 00:10:38.225 END TEST nvmf_filesystem 00:10:38.225 ************************************ 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:38.483 ************************************ 00:10:38.483 START TEST nvmf_target_discovery 00:10:38.483 ************************************ 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:38.483 * Looking for test storage... 00:10:38.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.483 23:48:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:38.483 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:38.484 23:48:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.383 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.383 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:40.384 23:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:40.384 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:40.384 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:40.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:40.384 23:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:40.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:40.384 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:40.385 23:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:40.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:10:40.385 00:10:40.385 --- 10.0.0.2 ping statistics --- 00:10:40.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.385 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:40.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:40.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:10:40.385 00:10:40.385 --- 10.0.0.1 ping statistics --- 00:10:40.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.385 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:40.385 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=3325805 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 3325805 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 3325805 ']' 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:40.643 23:48:10 
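[Editor's note] nvmf_tcp_init builds the test topology out of the two E810 ports, which are evidently connected to each other on this rig (NET_TYPE=phy): the target port is moved into a private network namespace while the initiator port stays in the host namespace, so NVMe/TCP traffic between 10.0.0.1 and 10.0.0.2 actually crosses the link instead of loopback. Condensed from the commands the log just ran:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side (host namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port inbound
    ping -c 1 10.0.0.2                                     # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> host reachability

Both pings succeeding (as shown above) is what lets nvmf_tcp_init return 0 and the test proceed.
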
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:40.643 23:48:10 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:40.643 [2024-07-24 23:48:11.047394] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:10:40.643 [2024-07-24 23:48:11.047472] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.643 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.643 [2024-07-24 23:48:11.116985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:40.643 [2024-07-24 23:48:11.238924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.643 [2024-07-24 23:48:11.238978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.643 [2024-07-24 23:48:11.238995] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.643 [2024-07-24 23:48:11.239009] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.643 [2024-07-24 23:48:11.239030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:40.643 [2024-07-24 23:48:11.239114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.643 [2024-07-24 23:48:11.239171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:40.643 [2024-07-24 23:48:11.239219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:40.643 [2024-07-24 23:48:11.239222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:41.576 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:41.576 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:10:41.576 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:41.576 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:41.576 23:48:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 [2024-07-24 23:48:12.016000] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
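[Editor's note] The target is then launched inside that namespace and a TCP transport is created over RPC. In these scripts rpc_cmd is, to my reading, a thin wrapper around SPDK's stock scripts/rpc.py client talking to /var/tmp/spdk.sock; a minimal equivalent of what just ran, with paths relative to the spdk tree:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
        # -m 0xF: cores 0-3 (the four reactors above); -e 0xFFFF: tracepoint group mask;
        # -i 0: shared-memory id, as assembled by build_nvmf_app_args
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
        # -u 8192 sets the I/O unit size; -o is the TCP-specific option
        # common.sh appends for tcp transports (kept as in the log)
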
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 Null1 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 [2024-07-24 23:48:12.056310] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 Null2 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:41.576 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 Null3 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 
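[Editor's note] discovery.sh provisions four identical subsystems, each backed by a null bdev and listening on the target address; the loop producing the Null1-Null4 records above boils down to the following sketch (rpc.py again standing in for rpc_cmd):

    for i in $(seq 1 4); do
        ./scripts/rpc.py bdev_null_create "Null$i" 102400 512      # size/block-size arguments as in the run above
        ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                            # -a: allow any host; -s: serial number
        ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
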
0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 Null4 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.577 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:41.835 00:10:41.835 Discovery Log Number of Records 6, Generation counter 6 00:10:41.835 =====Discovery Log Entry 0====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: current discovery subsystem 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4420 00:10:41.835 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: explicit discovery connections, duplicate discovery information 00:10:41.835 sectype: none 00:10:41.835 =====Discovery Log Entry 1====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: nvme subsystem 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4420 00:10:41.835 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: none 00:10:41.835 sectype: none 00:10:41.835 =====Discovery Log Entry 2====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: nvme subsystem 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4420 00:10:41.835 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: none 00:10:41.835 sectype: none 00:10:41.835 =====Discovery Log Entry 3====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: nvme subsystem 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4420 00:10:41.835 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: none 00:10:41.835 sectype: none 00:10:41.835 =====Discovery Log Entry 4====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: nvme subsystem 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4420 00:10:41.835 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: none 00:10:41.835 sectype: none 00:10:41.835 =====Discovery Log Entry 5====== 00:10:41.835 trtype: tcp 00:10:41.835 adrfam: ipv4 00:10:41.835 subtype: discovery subsystem referral 00:10:41.835 treq: not required 00:10:41.835 portid: 0 00:10:41.835 trsvcid: 4430 00:10:41.835 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:41.835 traddr: 10.0.0.2 00:10:41.835 eflags: none 00:10:41.835 sectype: none 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:41.835 Perform nvmf subsystem discovery via RPC 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.835 [ 00:10:41.835 { 00:10:41.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:41.835 "subtype": "Discovery", 00:10:41.835 "listen_addresses": [ 00:10:41.835 { 00:10:41.835 "trtype": "TCP", 00:10:41.835 "adrfam": "IPv4", 00:10:41.835 "traddr": "10.0.0.2", 00:10:41.835 "trsvcid": "4420" 00:10:41.835 } 00:10:41.835 ], 00:10:41.835 "allow_any_host": true, 00:10:41.835 "hosts": [] 00:10:41.835 }, 00:10:41.835 { 00:10:41.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:41.835 "subtype": "NVMe", 00:10:41.835 "listen_addresses": [ 00:10:41.835 { 00:10:41.835 "trtype": "TCP", 00:10:41.835 "adrfam": "IPv4", 00:10:41.835 
"traddr": "10.0.0.2", 00:10:41.835 "trsvcid": "4420" 00:10:41.835 } 00:10:41.835 ], 00:10:41.835 "allow_any_host": true, 00:10:41.835 "hosts": [], 00:10:41.835 "serial_number": "SPDK00000000000001", 00:10:41.835 "model_number": "SPDK bdev Controller", 00:10:41.835 "max_namespaces": 32, 00:10:41.835 "min_cntlid": 1, 00:10:41.835 "max_cntlid": 65519, 00:10:41.835 "namespaces": [ 00:10:41.835 { 00:10:41.835 "nsid": 1, 00:10:41.835 "bdev_name": "Null1", 00:10:41.835 "name": "Null1", 00:10:41.835 "nguid": "E28883B9C66D4589A0AC2A37C0BD2BF6", 00:10:41.835 "uuid": "e28883b9-c66d-4589-a0ac-2a37c0bd2bf6" 00:10:41.835 } 00:10:41.835 ] 00:10:41.835 }, 00:10:41.835 { 00:10:41.835 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:41.835 "subtype": "NVMe", 00:10:41.835 "listen_addresses": [ 00:10:41.835 { 00:10:41.835 "trtype": "TCP", 00:10:41.835 "adrfam": "IPv4", 00:10:41.835 "traddr": "10.0.0.2", 00:10:41.835 "trsvcid": "4420" 00:10:41.835 } 00:10:41.835 ], 00:10:41.835 "allow_any_host": true, 00:10:41.835 "hosts": [], 00:10:41.835 "serial_number": "SPDK00000000000002", 00:10:41.835 "model_number": "SPDK bdev Controller", 00:10:41.835 "max_namespaces": 32, 00:10:41.835 "min_cntlid": 1, 00:10:41.835 "max_cntlid": 65519, 00:10:41.835 "namespaces": [ 00:10:41.835 { 00:10:41.835 "nsid": 1, 00:10:41.835 "bdev_name": "Null2", 00:10:41.835 "name": "Null2", 00:10:41.835 "nguid": "BB4557567CA64AB08001533F2B0D4F49", 00:10:41.835 "uuid": "bb455756-7ca6-4ab0-8001-533f2b0d4f49" 00:10:41.835 } 00:10:41.835 ] 00:10:41.835 }, 00:10:41.835 { 00:10:41.835 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:41.835 "subtype": "NVMe", 00:10:41.835 "listen_addresses": [ 00:10:41.835 { 00:10:41.835 "trtype": "TCP", 00:10:41.835 "adrfam": "IPv4", 00:10:41.835 "traddr": "10.0.0.2", 00:10:41.835 "trsvcid": "4420" 00:10:41.835 } 00:10:41.835 ], 00:10:41.835 "allow_any_host": true, 00:10:41.835 "hosts": [], 00:10:41.835 "serial_number": "SPDK00000000000003", 00:10:41.835 "model_number": "SPDK bdev Controller", 00:10:41.835 "max_namespaces": 32, 00:10:41.835 "min_cntlid": 1, 00:10:41.835 "max_cntlid": 65519, 00:10:41.835 "namespaces": [ 00:10:41.835 { 00:10:41.835 "nsid": 1, 00:10:41.835 "bdev_name": "Null3", 00:10:41.835 "name": "Null3", 00:10:41.835 "nguid": "ADC5BA160CC54118A7D90D7EBED14D75", 00:10:41.835 "uuid": "adc5ba16-0cc5-4118-a7d9-0d7ebed14d75" 00:10:41.835 } 00:10:41.835 ] 00:10:41.835 }, 00:10:41.835 { 00:10:41.835 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:41.835 "subtype": "NVMe", 00:10:41.835 "listen_addresses": [ 00:10:41.835 { 00:10:41.835 "trtype": "TCP", 00:10:41.835 "adrfam": "IPv4", 00:10:41.835 "traddr": "10.0.0.2", 00:10:41.835 "trsvcid": "4420" 00:10:41.835 } 00:10:41.835 ], 00:10:41.835 "allow_any_host": true, 00:10:41.835 "hosts": [], 00:10:41.835 "serial_number": "SPDK00000000000004", 00:10:41.835 "model_number": "SPDK bdev Controller", 00:10:41.835 "max_namespaces": 32, 00:10:41.835 "min_cntlid": 1, 00:10:41.835 "max_cntlid": 65519, 00:10:41.835 "namespaces": [ 00:10:41.835 { 00:10:41.835 "nsid": 1, 00:10:41.835 "bdev_name": "Null4", 00:10:41.835 "name": "Null4", 00:10:41.835 "nguid": "AE09A010DE214AAEBDEFBC29DE1CE6E2", 00:10:41.835 "uuid": "ae09a010-de21-4aae-bdef-bc29de1ce6e2" 00:10:41.835 } 00:10:41.835 ] 00:10:41.835 } 00:10:41.835 ] 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:41.835 23:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:41.835 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:41.836 23:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:41.836 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:41.836 rmmod nvme_tcp 00:10:42.094 rmmod nvme_fabrics 00:10:42.094 rmmod nvme_keyring 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.094 23:48:12 
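[Editor's note] Teardown mirrors setup: each subsystem is deleted before its backing bdev, the referral is dropped, and the test then asserts that no bdevs survived. A sketch of the checks the log just performed:

    for i in $(seq 1 4); do
        ./scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        ./scripts/rpc.py bdev_null_delete "Null$i"
    done
    ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    check_bdevs=$(./scripts/rpc.py bdev_get_bdevs | jq -r '.[].name')
    [ -z "$check_bdevs" ]    # empty above ('[' -n '' ']'), so cleanup left nothing behind
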
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 3325805 ']' 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 3325805 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 3325805 ']' 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 3325805 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3325805 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3325805' 00:10:42.094 killing process with pid 3325805 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 3325805 00:10:42.094 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 3325805 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:42.352 23:48:12 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.253 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.253 00:10:44.253 real 0m5.974s 00:10:44.253 user 0m7.016s 00:10:44.253 sys 0m1.815s 00:10:44.253 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.253 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:44.253 ************************************ 00:10:44.253 END TEST nvmf_target_discovery 00:10:44.253 ************************************ 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:44.511 ************************************ 00:10:44.511 START TEST nvmf_referrals 00:10:44.511 ************************************ 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:44.511 * Looking for test storage... 00:10:44.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.511 23:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.511 23:48:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.511 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.512 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.512 23:48:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # 
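[Editor's note] referrals.sh sources the same common.sh but defines three referral addresses (127.0.0.2/3/4) and the 4430 referral port; the test body will exercise nvmf_discovery_add_referral / nvmf_discovery_remove_referral against them. Based only on the constants just defined, the add side has this shape (illustrative, not the script's literal sequence):

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
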
net_devs=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:46.409 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.409 23:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:46.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:46.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 
00:10:46.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:46.409 23:48:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:46.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:46.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:10:46.668 00:10:46.668 --- 10.0.0.2 ping statistics --- 00:10:46.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.668 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:46.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:46.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:10:46.668 00:10:46.668 --- 10.0.0.1 ping statistics --- 00:10:46.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:46.668 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=3327901 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 3327901 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 3327901 ']' 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:46.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
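The entries above show how the harness wires the two E810 ports into a loopback-style TCP fixture: port cvl_0_0 is moved into a fresh network namespace (cvl_0_0_ns_spdk) and becomes the target side at 10.0.0.2, while cvl_0_1 stays in the host namespace as the initiator at 10.0.0.1; an iptables rule admits the NVMe/TCP traffic and two pings prove reachability in both directions before nvmf_tgt is launched inside the namespace. A minimal sketch of the same topology, using the interface names, addresses, and flags taken from this log (paths shortened, adjust for your own NICs):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP data traffic in
  ping -c 1 10.0.0.2                                         # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &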
00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.668 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.668 [2024-07-24 23:48:17.170537] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:10:46.668 [2024-07-24 23:48:17.170625] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:46.668 EAL: No free 2048 kB hugepages reported on node 1 00:10:46.668 [2024-07-24 23:48:17.234668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:46.926 [2024-07-24 23:48:17.345685] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:46.926 [2024-07-24 23:48:17.345739] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:46.926 [2024-07-24 23:48:17.345753] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:46.926 [2024-07-24 23:48:17.345765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:46.926 [2024-07-24 23:48:17.345775] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:46.926 [2024-07-24 23:48:17.345846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:46.926 [2024-07-24 23:48:17.345922] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:46.926 [2024-07-24 23:48:17.345981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:46.926 [2024-07-24 23:48:17.345984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.926 [2024-07-24 23:48:17.500561] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.926 23:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.926 [2024-07-24 23:48:17.512813] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:46.926 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.183 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.441 23:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:47.441 23:48:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
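The entries above exercise the discovery-referral RPCs end to end: three referrals (127.0.0.2/.3/.4, port 4430) are added, read back both through the RPC interface and through an nvme-cli discovery of the target at 10.0.0.2:8009, removed again, and then re-added with explicit subsystem NQNs. In this log, rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; a hedged sketch of the same sequence driving rpc.py directly, with HOSTNQN standing in for the generated host NQN (nqn.2014-08.org.nvmexpress:uuid:... here):

  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
  scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Cross-check against what an initiator actually sees on the discovery service:
  nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
  # Referral pointing at a specific subsystem rather than another discovery service:
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1

The test passes when the sorted RPC view and the sorted discovery-log view agree.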
00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:47.441 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.699 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.956 23:48:18 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:47.956 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.213 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
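The get_discovery_entries calls traced here tell the two referral kinds apart by the subtype field of each discovery-log record: a referral created with -n nqn.2016-06.io.spdk:cnode1 must surface as an "nvme subsystem" record carrying that subnqn, while one created with -n discovery must surface as a "discovery subsystem referral" carrying the well-known discovery NQN. A short sketch of the check, with the jq filters copied from the log:

  nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'
  # expected: nqn.2016-06.io.spdk:cnode1
  nvme discover --hostnqn="$HOSTNQN" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype == "discovery subsystem referral") | .subnqn'
  # expected: nqn.2014-08.org.nvmexpress.discovery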
00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:48.470 23:48:18 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:48.470 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:48.470 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:48.470 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:48.470 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:48.470 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:48.471 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:48.471 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
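With the referral list empty again, nvmftestfini tears the fixture down: the nvme-tcp, nvme-fabrics, and nvme-keyring modules are unloaded (the rmmod lines below are modprobe's verbose output), the nvmf_tgt process recorded as nvmfpid (3327901 here) is killed and reaped, and the test namespace is removed before the remaining interface address is flushed. A hedged sketch of the equivalent cleanup; the namespace deletion is an assumption about what _remove_spdk_ns does, since the log only shows its redirected output:

  sync
  modprobe -v -r nvme-tcp            # verbose removal, reports the rmmod of dependent modules
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # stop the nvmf_tgt started earlier
  ip netns del cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
  ip -4 addr flush cvl_0_1           # shown explicitly at the end of this test's log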
00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:48.728 rmmod nvme_tcp 00:10:48.728 rmmod nvme_fabrics 00:10:48.728 rmmod nvme_keyring 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 3327901 ']' 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 3327901 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 3327901 ']' 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 3327901 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3327901 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3327901' 00:10:48.728 killing process with pid 3327901 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 3327901 00:10:48.728 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 3327901 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.986 23:48:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:51.515 00:10:51.515 real 0m6.641s 00:10:51.515 user 0m9.786s 00:10:51.515 sys 0m2.093s 00:10:51.515 23:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 ************************************ 00:10:51.515 END TEST nvmf_referrals 00:10:51.515 ************************************ 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:51.515 ************************************ 00:10:51.515 START TEST nvmf_connect_disconnect 00:10:51.515 ************************************ 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:51.515 * Looking for test storage... 00:10:51.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:51.515 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:51.516 23:48:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:51.516 23:48:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:53.430 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:53.431 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:53.431 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.431 23:48:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:53.431 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:53.431 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:53.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:53.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:10:53.431 00:10:53.431 --- 10.0.0.2 ping statistics --- 00:10:53.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.431 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:53.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:53.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:10:53.431 00:10:53.431 --- 10.0.0.1 ping statistics --- 00:10:53.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:53.431 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=3330190 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 3330190 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 3330190 ']' 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:53.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:53.431 23:48:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.431 [2024-07-24 23:48:23.946110] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:10:53.431 [2024-07-24 23:48:23.946206] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.431 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.431 [2024-07-24 23:48:24.020017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:53.689 [2024-07-24 23:48:24.148135] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:53.689 [2024-07-24 23:48:24.148188] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:53.689 [2024-07-24 23:48:24.148205] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:53.689 [2024-07-24 23:48:24.148219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:53.689 [2024-07-24 23:48:24.148231] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:53.689 [2024-07-24 23:48:24.150268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:53.689 [2024-07-24 23:48:24.150327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:53.689 [2024-07-24 23:48:24.150348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:53.689 [2024-07-24 23:48:24.150352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.689 [2024-07-24 23:48:24.291437] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:10:53.689 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.946 23:48:24 
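With networking verified, nvmfappstart launches the target inside the namespace and blocks until its RPC socket answers; the EAL and reactor notices above are nvmf_tgt coming up on cores 0-3 (-m 0xF). A minimal equivalent of the @480-@482 lines; the polling loop stands in for the harness's waitforlisten helper and is an assumption, not its actual body:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# assumed stand-in for waitforlisten: poll until /var/tmp/spdk.sock serves RPCs
until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
    sleep 0.1
done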
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:53.946 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:53.946 [2024-07-24 23:48:24.342455] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:53.947 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:53.947 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:53.947 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:53.947 23:48:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:56.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:07.576 23:48:37 
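The five "disconnected 1 controller(s)" lines are the body of the test: the target is provisioned once over RPC, then the host attaches and detaches num_iterations=5 times. The RPC sequence below is exactly what rpc_cmd issued in the trace (shown here as direct rpc.py calls); the connect half of the loop is not echoed in the trace, so the nvme-cli invocation is inferred from the disconnect output:
rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                                   # -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
for i in $(seq 1 5); do
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # inferred
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
done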
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:07.576 rmmod nvme_tcp 00:11:07.576 rmmod nvme_fabrics 00:11:07.576 rmmod nvme_keyring 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 3330190 ']' 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 3330190 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3330190 ']' 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 3330190 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:07.576 23:48:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3330190 00:11:07.576 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:07.576 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:07.576 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3330190' 00:11:07.576 killing process with pid 3330190 00:11:07.576 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 3330190 00:11:07.576 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 3330190 00:11:07.833 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:07.833 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:07.834 23:48:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.734 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:09.992 00:11:09.992 real 0m18.762s 00:11:09.992 user 0m55.960s 00:11:09.992 sys 0m3.340s 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.992 23:48:40 
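nvmftestfini unwinds the setup in reverse order: unload the host-side modules (the rmmod lines above are modprobe -r's verbose output), kill the target only after confirming the pid still belongs to an SPDK reactor, then drop the namespace and flush the initiator address. Condensed from the trace; the netns deletion is an assumption, since _remove_spdk_ns is not expanded here:
modprobe -v -r nvme-tcp            # drags out nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
[ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ] && kill "$nvmfpid" && wait "$nvmfpid"
ip netns delete cvl_0_0_ns_spdk    # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1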
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:09.992 ************************************ 00:11:09.992 END TEST nvmf_connect_disconnect 00:11:09.992 ************************************ 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:09.992 ************************************ 00:11:09.992 START TEST nvmf_multitarget 00:11:09.992 ************************************ 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:09.992 * Looking for test storage... 00:11:09.992 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.992 23:48:40 
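Every test in this log goes through the run_test wrapper, which rejects short argument lists (the '[' 3 -le 1 ']' check above), prints the START banner, times the script, and closes with the END banner plus the real/user/sys summary. The invocation is verbatim from the trace; the wrapper body is paraphrased, not quoted:
run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp
# roughly: echo 'START TEST <name>'; time "$@"; echo 'END TEST <name>'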
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.992 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
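Just before the PATH exports above, common.sh fixed the host identity that later nvme connect calls can present: NVME_HOSTNQN comes straight from nvme gen-hostnqn and NVME_HOSTID is its trailing UUID. The extraction expression below is assumed; the trace shows only the resulting values:
NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed; yields the bare UUID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")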
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:09.993 23:48:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:12.521 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:12.521 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:12.521 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:12.521 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:12.521 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:12.522 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.522 23:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:12.522 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:12.522 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:12.522 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:12.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:12.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:11:12.522 00:11:12.522 --- 10.0.0.2 ping statistics --- 00:11:12.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.522 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:12.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:11:12.522 00:11:12.522 --- 10.0.0.1 ping statistics --- 00:11:12.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.522 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:12.522 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=3333944 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 3333944 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 3333944 ']' 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
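The block above is the same nvmf/common.sh bring-up replayed for the multitarget test: both 0x8086:0x159b (E810) ports are located through sysfs, only interfaces reporting up survive, and NVMF_APP gets the namespace prefix so the target always executes inside cvl_0_0_ns_spdk. Condensed from the @382-@401 discovery loop and the @243/@270 assignments:
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # netdevs backed by this PCI function
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip sysfs path -> cvl_0_0 / cvl_0_1
    net_devs+=("${pci_net_devs[@]}")                   # kept only when operstate is up
done
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")   # every later launch runs in the netns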
00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:12.523 23:48:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:12.523 [2024-07-24 23:48:42.787730] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:11:12.523 [2024-07-24 23:48:42.787809] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.523 EAL: No free 2048 kB hugepages reported on node 1 00:11:12.523 [2024-07-24 23:48:42.853657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.523 [2024-07-24 23:48:42.963995] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.523 [2024-07-24 23:48:42.964068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.523 [2024-07-24 23:48:42.964096] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:12.523 [2024-07-24 23:48:42.964108] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:12.523 [2024-07-24 23:48:42.964118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.523 [2024-07-24 23:48:42.964203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:12.523 [2024-07-24 23:48:42.964269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:12.523 [2024-07-24 23:48:42.964334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:12.523 [2024-07-24 23:48:42.964337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:12.523 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:12.780 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:12.780 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:12.780 "nvmf_tgt_1" 00:11:12.780 23:48:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:11:13.037 "nvmf_tgt_2"
00:11:13.037 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:11:13.037 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:11:13.037 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:11:13.037 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:11:13.293 true
00:11:13.293 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:11:13.293 true
00:11:13.293 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:11:13.293 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:13.551 rmmod nvme_tcp
00:11:13.551 rmmod nvme_fabrics
00:11:13.551 rmmod nvme_keyring
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 3333944 ']'
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 3333944
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 3333944 ']'
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 3333944
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
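Laid out, the multitarget body is a create/count/delete round-trip against the management RPC surface: assert the single default target, add two named targets, assert three, remove them, and assert one again. These are exactly the @21-@35 calls from the trace above:
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
[ "$($rpc nvmf_get_targets | jq length)" = 1 ]
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32    # prints "nvmf_tgt_1"
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32    # prints "nvmf_tgt_2"
[ "$($rpc nvmf_get_targets | jq length)" = 3 ]
$rpc nvmf_delete_target -n nvmf_tgt_1          # prints true
$rpc nvmf_delete_target -n nvmf_tgt_2          # prints true
[ "$($rpc nvmf_get_targets | jq length)" = 1 ]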
00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3333944 00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3333944' 00:11:13.551 killing process with pid 3333944 00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 3333944 00:11:13.551 23:48:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 3333944 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:13.809 23:48:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:15.736 00:11:15.736 real 0m5.922s 00:11:15.736 user 0m6.605s 00:11:15.736 sys 0m1.964s 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:15.736 ************************************ 00:11:15.736 END TEST nvmf_multitarget 00:11:15.736 ************************************ 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.736 23:48:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:15.993 ************************************ 00:11:15.993 START TEST nvmf_rpc 00:11:15.993 ************************************ 00:11:15.993 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:15.993 * Looking for test storage... 
00:11:15.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.993 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.993 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:15.993 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:15.994 23:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:11:15.994 23:48:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.898 23:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:17.898 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:17.898 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.898 
23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:17.898 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:17.898 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.898 23:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:17.898 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:18.157 00:11:18.157 --- 10.0.0.2 ping statistics --- 00:11:18.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.157 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:18.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:11:18.157 00:11:18.157 --- 10.0.0.1 ping statistics --- 00:11:18.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.157 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=3336039 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:18.157 23:48:48 
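nvmf_tcp_init wires the two E810 ports into a loopback topology: the target port cvl_0_0 (10.0.0.2/24) is moved into the cvl_0_0_ns_spdk network namespace while the initiator port cvl_0_1 (10.0.0.1/24) stays in the default namespace, the firewall is opened for the NVMe/TCP port, and both directions are ping-verified before nvmf_tgt is launched inside the namespace. Condensed from the trace above, same interface names assumed:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
  ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # sketch: real path is under spdk/build/bin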
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 3336039 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 3336039 ']' 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:18.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:18.157 23:48:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.157 [2024-07-24 23:48:48.629482] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:11:18.157 [2024-07-24 23:48:48.629577] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:18.157 EAL: No free 2048 kB hugepages reported on node 1 00:11:18.157 [2024-07-24 23:48:48.699381] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:18.414 [2024-07-24 23:48:48.820635] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:18.414 [2024-07-24 23:48:48.820721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:18.414 [2024-07-24 23:48:48.820738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:18.414 [2024-07-24 23:48:48.820751] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:18.414 [2024-07-24 23:48:48.820763] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
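waitforlisten gates all later rpc_cmd calls on the target's JSON-RPC socket: the trace above shows nvmf_tgt (pid 3336039) bringing up DPDK EAL with 4 cores before /var/tmp/spdk.sock answers. A hedged sketch of the same wait, assuming the default socket path used here:

  # poll (up to ~10 s) for the SPDK RPC Unix socket, then confirm it answers
  for _ in $(seq 1 100); do
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done
  scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null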
00:11:18.414 [2024-07-24 23:48:48.820834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:18.414 [2024-07-24 23:48:48.820890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:18.415 [2024-07-24 23:48:48.820946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:18.415 [2024-07-24 23:48:48.820949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:18.978 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:19.235 "tick_rate": 2700000000, 00:11:19.235 "poll_groups": [ 00:11:19.235 { 00:11:19.235 "name": "nvmf_tgt_poll_group_000", 00:11:19.235 "admin_qpairs": 0, 00:11:19.235 "io_qpairs": 0, 00:11:19.235 "current_admin_qpairs": 0, 00:11:19.235 "current_io_qpairs": 0, 00:11:19.235 "pending_bdev_io": 0, 00:11:19.235 "completed_nvme_io": 0, 00:11:19.235 "transports": [] 00:11:19.235 }, 00:11:19.235 { 00:11:19.235 "name": "nvmf_tgt_poll_group_001", 00:11:19.235 "admin_qpairs": 0, 00:11:19.235 "io_qpairs": 0, 00:11:19.235 "current_admin_qpairs": 0, 00:11:19.235 "current_io_qpairs": 0, 00:11:19.235 "pending_bdev_io": 0, 00:11:19.235 "completed_nvme_io": 0, 00:11:19.235 "transports": [] 00:11:19.235 }, 00:11:19.235 { 00:11:19.235 "name": "nvmf_tgt_poll_group_002", 00:11:19.235 "admin_qpairs": 0, 00:11:19.235 "io_qpairs": 0, 00:11:19.235 "current_admin_qpairs": 0, 00:11:19.235 "current_io_qpairs": 0, 00:11:19.235 "pending_bdev_io": 0, 00:11:19.235 "completed_nvme_io": 0, 00:11:19.235 "transports": [] 00:11:19.235 }, 00:11:19.235 { 00:11:19.235 "name": "nvmf_tgt_poll_group_003", 00:11:19.235 "admin_qpairs": 0, 00:11:19.235 "io_qpairs": 0, 00:11:19.235 "current_admin_qpairs": 0, 00:11:19.235 "current_io_qpairs": 0, 00:11:19.235 "pending_bdev_io": 0, 00:11:19.235 "completed_nvme_io": 0, 00:11:19.235 "transports": [] 00:11:19.235 } 00:11:19.235 ] 00:11:19.235 }' 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
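With the target idle, rpc_cmd nvmf_get_stats returns one poll group per reactor and an empty transports array; jcount is simply a jq filter piped through wc -l, and the (( 4 == 4 )) check ties the poll-group count to the 0xF core mask. The same checks, assuming $stats holds the JSON captured above:

  echo "$stats" | jq '.poll_groups[].name' | wc -l    # jcount: 4 poll groups for -m 0xF
  echo "$stats" | jq '.poll_groups[0].transports[0]'  # null until a transport is created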
00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.235 [2024-07-24 23:48:49.681285] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.235 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:19.235 "tick_rate": 2700000000, 00:11:19.235 "poll_groups": [ 00:11:19.235 { 00:11:19.235 "name": "nvmf_tgt_poll_group_000", 00:11:19.235 "admin_qpairs": 0, 00:11:19.236 "io_qpairs": 0, 00:11:19.236 "current_admin_qpairs": 0, 00:11:19.236 "current_io_qpairs": 0, 00:11:19.236 "pending_bdev_io": 0, 00:11:19.236 "completed_nvme_io": 0, 00:11:19.236 "transports": [ 00:11:19.236 { 00:11:19.236 "trtype": "TCP" 00:11:19.236 } 00:11:19.236 ] 00:11:19.236 }, 00:11:19.236 { 00:11:19.236 "name": "nvmf_tgt_poll_group_001", 00:11:19.236 "admin_qpairs": 0, 00:11:19.236 "io_qpairs": 0, 00:11:19.236 "current_admin_qpairs": 0, 00:11:19.236 "current_io_qpairs": 0, 00:11:19.236 "pending_bdev_io": 0, 00:11:19.236 "completed_nvme_io": 0, 00:11:19.236 "transports": [ 00:11:19.236 { 00:11:19.236 "trtype": "TCP" 00:11:19.236 } 00:11:19.236 ] 00:11:19.236 }, 00:11:19.236 { 00:11:19.236 "name": "nvmf_tgt_poll_group_002", 00:11:19.236 "admin_qpairs": 0, 00:11:19.236 "io_qpairs": 0, 00:11:19.236 "current_admin_qpairs": 0, 00:11:19.236 "current_io_qpairs": 0, 00:11:19.236 "pending_bdev_io": 0, 00:11:19.236 "completed_nvme_io": 0, 00:11:19.236 "transports": [ 00:11:19.236 { 00:11:19.236 "trtype": "TCP" 00:11:19.236 } 00:11:19.236 ] 00:11:19.236 }, 00:11:19.236 { 00:11:19.236 "name": "nvmf_tgt_poll_group_003", 00:11:19.236 "admin_qpairs": 0, 00:11:19.236 "io_qpairs": 0, 00:11:19.236 "current_admin_qpairs": 0, 00:11:19.236 "current_io_qpairs": 0, 00:11:19.236 "pending_bdev_io": 0, 00:11:19.236 "completed_nvme_io": 0, 00:11:19.236 "transports": [ 00:11:19.236 { 00:11:19.236 "trtype": "TCP" 00:11:19.236 } 00:11:19.236 ] 00:11:19.236 } 00:11:19.236 ] 00:11:19.236 }' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:19.236 23:48:49 
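nvmf_create_transport -t tcp -o -u 8192 installs the TCP transport, and the second nvmf_get_stats snapshot now shows a {"trtype": "TCP"} entry in every poll group. jsum is the summing counterpart to jcount, a jq filter totalled with awk, used here (and again just below for io_qpairs) to assert zero qpairs before any host connects. Assuming $stats holds the post-transport JSON:

  # jsum '.poll_groups[].admin_qpairs' — total a counter across all poll groups
  echo "$stats" | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'   # expect 0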
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.236 Malloc1 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.236 [2024-07-24 23:48:49.829008] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:19.236 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:19.493 [2024-07-24 23:48:49.851471] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:19.493 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:19.493 could not add new controller: failed to write to nvme-fabrics device 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:19.493 23:48:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:20.057 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:20.057 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:20.057 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:20.057 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:20.057 23:48:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:21.952 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:22.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.210 [2024-07-24 23:48:52.641249] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:22.210 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:22.210 could not add new controller: failed to write to nvme-fabrics device 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:22.210 23:48:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.774 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.774 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:22.774 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.774 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:22.774 23:48:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 
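This passage is the host access-control test. With allow_any_host disabled (-d), nvme connect from the uuid-based host NQN is rejected in nvmf_qpair_access_allowed ("does not allow host"), and the NOT wrapper asserts that failure; whitelisting the NQN with nvmf_subsystem_add_host lets the same connect succeed, nvmf_subsystem_remove_host restores the denial, and allow_any_host -e finally admits any initiator. The RPC sequence condensed, with $HOSTNQN standing in for the nqn.2014-08.org.nvmexpress:uuid value in the trace:

  rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # deny unlisted hosts
  # nvme connect now fails: Input/output error on /dev/nvme-fabrics
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"  # whitelist -> connect succeeds
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 "$HOSTNQN"
  rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1    # open to all hosts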
00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:25.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.293 [2024-07-24 23:48:55.527412] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.293 
23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:25.293 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:25.294 23:48:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.857 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.857 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:25.857 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.857 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:25.857 23:48:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 
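waitforserial and waitforserial_disconnect close the loop on each connect/disconnect: they poll lsblk for a block device whose SERIAL matches the subsystem's -s value (SPDKISFASTANDAWESOME) until it appears or vanishes. A trimmed sketch of the appearance side, with the retry bound visible in the trace:

  # wait for the namespace's block device to surface after nvme connect
  i=0
  while (( i++ <= 15 )); do
      (( $(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME) == 1 )) && break
      sleep 2
  done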
00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.751 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.751 [2024-07-24 23:48:58.363264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.009 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.573 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.573 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1196 -- # local i=0 00:11:28.573 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.573 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:28.573 23:48:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:30.468 23:49:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.468 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:30.469 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:30.469 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.469 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.726 [2024-07-24 23:49:01.089422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.726 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:31.291 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:31.291 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:31.291 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:31.291 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:31.291 23:49:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:33.215 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:33.472 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.472 23:49:03 
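The iterations around this point come from rpc.sh's for i in $(seq 1 $loops) with loops=5: each pass rebuilds the subsystem from nothing, attaches Malloc1 as namespace 5, connects, verifies the serial, disconnects, and tears everything down, exercising repeated create/delete against a live transport. One iteration's skeleton, with $HOSTNQN/$HOSTID standing in for the uuid values in the trace:

  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
  rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  waitforserial SPDKISFASTANDAWESOME
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  waitforserial_disconnect SPDKISFASTANDAWESOME
  rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1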
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 [2024-07-24 23:49:03.903904] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.472 23:49:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:34.036 23:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:34.037 23:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:34.037 23:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:34.037 23:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:34.037 23:49:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:35.930 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.188 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 [2024-07-24 23:49:06.630330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.188 23:49:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:36.753 23:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:36.753 23:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1196 -- # local i=0 00:11:36.753 23:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:11:36.753 23:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:11:36.753 23:49:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # sleep 2 00:11:38.647 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:11:38.647 23:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:11:38.647 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # return 0 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1217 -- # local i=0 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # return 0 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 
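The pass traced above is one iteration of the rpc.sh connect/disconnect loop: create a subsystem, add a TCP listener and a namespace, open it to any host, connect the kernel initiator, wait for the SPDKISFASTANDAWESOME serial to show up in lsblk, then disconnect and tear everything back down. A minimal standalone reproduction of that iteration, assuming an SPDK checkout with scripts/rpc.py and an existing Malloc1 bdev (the later iterations in this log, from target/rpc.sh@99 on, run the same create/delete cycle without the connect step):

  rpc=./scripts/rpc.py                     # assumed location of SPDK's rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
  $rpc nvmf_subsystem_allow_any_host "$nqn"
  nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n "$nqn"
  $rpc nvmf_subsystem_remove_ns "$nqn" 5
  $rpc nvmf_delete_subsystem "$nqn"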
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 [2024-07-24 23:49:09.405579] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.905 [2024-07-24 23:49:09.453615] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.905 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 [2024-07-24 23:49:09.501773] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:38.906 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.163 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 [2024-07-24 23:49:09.549924] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 [2024-07-24 23:49:09.598094] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:39.164 23:49:09 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:11:39.164 "tick_rate": 2700000000,
00:11:39.164 "poll_groups": [
00:11:39.164 {
00:11:39.164 "name": "nvmf_tgt_poll_group_000",
00:11:39.164 "admin_qpairs": 2,
00:11:39.164 "io_qpairs": 84,
00:11:39.164 "current_admin_qpairs": 0,
00:11:39.164 "current_io_qpairs": 0,
00:11:39.164 "pending_bdev_io": 0,
00:11:39.164 "completed_nvme_io": 118,
00:11:39.164 "transports": [
00:11:39.164 {
00:11:39.164 "trtype": "TCP"
00:11:39.164 }
00:11:39.164 ]
00:11:39.164 },
00:11:39.164 {
00:11:39.164 "name": "nvmf_tgt_poll_group_001",
00:11:39.164 "admin_qpairs": 2,
00:11:39.164 "io_qpairs": 84,
00:11:39.164 "current_admin_qpairs": 0,
00:11:39.164 "current_io_qpairs": 0,
00:11:39.164 "pending_bdev_io": 0,
00:11:39.164 "completed_nvme_io": 196,
00:11:39.164 "transports": [
00:11:39.164 {
00:11:39.164 "trtype": "TCP"
00:11:39.164 }
00:11:39.164 ]
00:11:39.164 },
00:11:39.164 {
00:11:39.164 "name": "nvmf_tgt_poll_group_002",
00:11:39.164 "admin_qpairs": 1,
00:11:39.164 "io_qpairs": 84,
00:11:39.164 "current_admin_qpairs": 0,
00:11:39.164 "current_io_qpairs": 0,
00:11:39.164 "pending_bdev_io": 0,
00:11:39.164 "completed_nvme_io": 140,
00:11:39.164 "transports": [
00:11:39.164 {
00:11:39.164 "trtype": "TCP"
00:11:39.164 }
00:11:39.164 ]
00:11:39.164 },
00:11:39.164 {
00:11:39.164 "name": "nvmf_tgt_poll_group_003",
00:11:39.164 "admin_qpairs": 2,
00:11:39.164 "io_qpairs": 84,
00:11:39.164 "current_admin_qpairs": 0,
00:11:39.164 "current_io_qpairs": 0,
00:11:39.164 "pending_bdev_io": 0,
00:11:39.164 "completed_nvme_io": 232,
00:11:39.164 "transports": [
00:11:39.164 {
00:11:39.164 "trtype": "TCP"
00:11:39.164 }
00:11:39.164 ]
00:11:39.164 }
00:11:39.164 ]
00:11:39.164 }'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq
'.poll_groups[].io_qpairs' 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.164 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.164 rmmod nvme_tcp 00:11:39.164 rmmod nvme_fabrics 00:11:39.421 rmmod nvme_keyring 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 3336039 ']' 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 3336039 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 3336039 ']' 00:11:39.421 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 3336039 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3336039 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3336039' 00:11:39.422 killing process with pid 3336039 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 3336039 00:11:39.422 23:49:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 3336039 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
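The `(( 7 > 0 ))` and `(( 336 > 0 ))` checks above are rpc.sh's jsum helper at work: it runs a jq filter over the captured nvmf_get_stats JSON and sums the matches with awk, so 2+2+1+2 admin qpairs across the four poll groups yields 7, and 84 I/O qpairs per group yields 336. A rough equivalent of the helper (in the script jsum reads the saved $stats variable; piping rpc.py directly, as here, is an assumption made to keep the sketch self-contained):

  jsum() {
    local filter=$1
    ./scripts/rpc.py nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 7 in this run
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 336 in this run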
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.679 23:49:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.604 00:11:41.604 real 0m25.808s 00:11:41.604 user 1m24.435s 00:11:41.604 sys 0m4.158s 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.604 ************************************ 00:11:41.604 END TEST nvmf_rpc 00:11:41.604 ************************************ 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.604 23:49:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.862 ************************************ 00:11:41.862 START TEST nvmf_invalid 00:11:41.862 ************************************ 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:41.862 * Looking for test storage... 00:11:41.862 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.862 23:49:12 
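With nvmf_rpc finished, run_test hands control to invalid.sh, which sources nvmf/common.sh like every other target test. The host identity used for connects is derived right here in the trace: `nvme gen-hostnqn` emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the UUID suffix doubles as the host ID. A sketch of that derivation (the parameter expansion is an assumption about how common.sh strips the prefix; the trace only shows the resulting values):

  NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # assumed: keep only the trailing UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")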
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.862 23:49:12 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.862 23:49:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.759 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.760 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.760 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.760 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.760 23:49:14 
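Both ports of the adapter are discovered purely by PCI ID: common.sh keeps allow-lists for Intel E810 (0x1592, 0x159b) and X722 (0x37d2) parts plus a set of Mellanox IDs, and each matching function is then mapped to its kernel netdev through sysfs, which is where the cvl_0_0 and cvl_0_1 names in this trace come from. The sysfs lookup the helper performs boils down to:

  for pci in 0000:0a:00.0 0000:0a:00.1; do
    ls "/sys/bus/pci/devices/$pci/net/"   # cvl_0_0 and cvl_0_1 on this node
  done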
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.760 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.760 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:44.019 00:11:44.019 --- 10.0.0.2 ping statistics --- 00:11:44.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.019 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:44.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:11:44.019 00:11:44.019 --- 10.0.0.1 ping statistics --- 00:11:44.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.019 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=3340654 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 3340654 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 3340654 ']' 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.019 23:49:14 
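nvmf_tcp_init turns the NIC's two ports into a loopback-free test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings above prove the path in both directions before any NVMe traffic flows. Condensed from the trace, the setup is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator ns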
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:44.019 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:44.019 [2024-07-24 23:49:14.494444] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:11:44.019 [2024-07-24 23:49:14.494521] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.019 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.019 [2024-07-24 23:49:14.561645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.276 [2024-07-24 23:49:14.683641] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.276 [2024-07-24 23:49:14.683705] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.276 [2024-07-24 23:49:14.683721] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.276 [2024-07-24 23:49:14.683735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.276 [2024-07-24 23:49:14.683746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
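The target is launched inside the namespace with full tracing (`-e 0xFFFF`) on four cores (`-m 0xF`), and waitforlisten blocks until the RPC socket at /var/tmp/spdk.sock answers, which is what the max_retries=100 bookkeeping above is for. A simplified stand-in for that startup sequence (the rpc_get_methods probe is an assumption about a cheap RPC to poll with; common.sh's actual readiness check may differ):

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for ((i = 0; i < 100; i++)); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
  done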
00:11:44.276 [2024-07-24 23:49:14.683806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.276 [2024-07-24 23:49:14.683861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.276 [2024-07-24 23:49:14.683914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.276 [2024-07-24 23:49:14.683917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.276 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:44.277 23:49:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24091 00:11:44.533 [2024-07-24 23:49:15.092450] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:44.533 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:44.533 { 00:11:44.533 "nqn": "nqn.2016-06.io.spdk:cnode24091", 00:11:44.533 "tgt_name": "foobar", 00:11:44.533 "method": "nvmf_create_subsystem", 00:11:44.533 "req_id": 1 00:11:44.533 } 00:11:44.533 Got JSON-RPC error response 00:11:44.533 response: 00:11:44.533 { 00:11:44.533 "code": -32603, 00:11:44.533 "message": "Unable to find target foobar" 00:11:44.533 }' 00:11:44.533 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:44.534 { 00:11:44.534 "nqn": "nqn.2016-06.io.spdk:cnode24091", 00:11:44.534 "tgt_name": "foobar", 00:11:44.534 "method": "nvmf_create_subsystem", 00:11:44.534 "req_id": 1 00:11:44.534 } 00:11:44.534 Got JSON-RPC error response 00:11:44.534 response: 00:11:44.534 { 00:11:44.534 "code": -32603, 00:11:44.534 "message": "Unable to find target foobar" 00:11:44.534 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:44.534 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:44.534 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode8180 00:11:44.790 [2024-07-24 23:49:15.337289] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8180: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:44.791 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:44.791 { 00:11:44.791 "nqn": "nqn.2016-06.io.spdk:cnode8180", 00:11:44.791 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:44.791 "method": "nvmf_create_subsystem", 00:11:44.791 "req_id": 1 00:11:44.791 } 00:11:44.791 Got JSON-RPC error 
response 00:11:44.791 response: 00:11:44.791 { 00:11:44.791 "code": -32602, 00:11:44.791 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:44.791 }' 00:11:44.791 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:44.791 { 00:11:44.791 "nqn": "nqn.2016-06.io.spdk:cnode8180", 00:11:44.791 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:44.791 "method": "nvmf_create_subsystem", 00:11:44.791 "req_id": 1 00:11:44.791 } 00:11:44.791 Got JSON-RPC error response 00:11:44.791 response: 00:11:44.791 { 00:11:44.791 "code": -32602, 00:11:44.791 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:44.791 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:44.791 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:44.791 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode10610 00:11:45.048 [2024-07-24 23:49:15.634275] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10610: invalid model number 'SPDK_Controller' 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:45.048 { 00:11:45.048 "nqn": "nqn.2016-06.io.spdk:cnode10610", 00:11:45.048 "model_number": "SPDK_Controller\u001f", 00:11:45.048 "method": "nvmf_create_subsystem", 00:11:45.048 "req_id": 1 00:11:45.048 } 00:11:45.048 Got JSON-RPC error response 00:11:45.048 response: 00:11:45.048 { 00:11:45.048 "code": -32602, 00:11:45.048 "message": "Invalid MN SPDK_Controller\u001f" 00:11:45.048 }' 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:45.048 { 00:11:45.048 "nqn": "nqn.2016-06.io.spdk:cnode10610", 00:11:45.048 "model_number": "SPDK_Controller\u001f", 00:11:45.048 "method": "nvmf_create_subsystem", 00:11:45.048 "req_id": 1 00:11:45.048 } 00:11:45.048 Got JSON-RPC error response 00:11:45.048 response: 00:11:45.048 { 00:11:45.048 "code": -32602, 00:11:45.048 "message": "Invalid MN SPDK_Controller\u001f" 00:11:45.048 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:45.048 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
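Every negative case in invalid.sh follows the pattern visible above: issue the RPC with one deliberately bad argument (a nonexistent target name, then a serial number and a model number ending in the unprintable byte 0x1f), capture the JSON-RPC error, and glob-match the message. The printf/echo churn that follows is gen_random_s assembling a 21-character string from ASCII codes 32 through 127 for the next round of bad input. One check, condensed (the rpc.py path is shortened from the trace):

  out=$(./scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
        nqn.2016-06.io.spdk:cnode8180 2>&1) || true
  [[ $out == *"Invalid SN"* ]]   # expect JSON-RPC code -32602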
target/invalid.sh@25 -- # printf %x 89 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.305 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:45.306 23:49:15 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Y == \- ]] 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Ytal'\''//5LmhSeO{s>}+>X' 00:11:45.306 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Ytal'\''//5LmhSeO{s>}+>X' nqn.2016-06.io.spdk:cnode2412 00:11:45.564 [2024-07-24 23:49:15.939311] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2412: invalid serial number 'Ytal'//5LmhSeO{s>}+>X' 00:11:45.564 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:45.564 { 00:11:45.564 "nqn": "nqn.2016-06.io.spdk:cnode2412", 00:11:45.564 "serial_number": "Ytal'\''//5LmhSeO{s>}+>X", 00:11:45.564 "method": "nvmf_create_subsystem", 00:11:45.564 "req_id": 1 00:11:45.564 } 00:11:45.564 Got JSON-RPC error response 00:11:45.564 response: 00:11:45.564 { 00:11:45.564 "code": -32602, 00:11:45.564 "message": "Invalid SN Ytal'\''//5LmhSeO{s>}+>X" 00:11:45.564 }' 00:11:45.564 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:45.564 { 00:11:45.564 "nqn": "nqn.2016-06.io.spdk:cnode2412", 00:11:45.564 "serial_number": "Ytal'//5LmhSeO{s>}+>X", 00:11:45.564 "method": "nvmf_create_subsystem", 00:11:45.564 "req_id": 1 00:11:45.564 } 00:11:45.564 Got JSON-RPC error response 00:11:45.564 response: 00:11:45.564 { 00:11:45.564 "code": -32602, 00:11:45.564 "message": "Invalid SN Ytal'//5LmhSeO{s>}+>X" 00:11:45.564 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:45.564 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:45.564 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:45.565 23:49:15 
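
Each printf %x / echo -e / string+= triple above is one iteration of gen_random_s: it indexes a 96-entry array of ASCII codes (32 through 127), renders the chosen code as a character, and appends it; the closing [[ Y == \- ]] check guards against a result that starts with '-' and would be parsed as an option. A reconstruction inferred from the trace (the real helper lives in target/invalid.sh; the leading-dash recovery step is an assumption):

    # gen_random_s N: emit N random characters from ASCII 32..127 (sketch).
    gen_random_s() {
        local length=$1 ll c string=
        local chars=($(seq 32 127))                 # same range as the traced array
        for ((ll = 0; ll < length; ll++)); do
            printf -v c '\\x%x' "${chars[RANDOM % ${#chars[@]}]}"
            string+=$(echo -e "$c")                 # e.g. '\x59' -> Y
        done
        [[ ${string:0:1} == - ]] && string="_${string:1}"   # assumed recovery step
        echo "$string"
    }

Running it with length 21 yields strings like the Ytal'//5LmhSeO{s>}+>X serial fed to cnode2412 above; the 41-character model-number run that follows uses the same loop.
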
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:45.565 
23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:45.565 
23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.565 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x2f' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 81 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ " == \- ]] 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '"|="7rf9bNT5^*oN\e(Jk(|b7.oyW\xQ/Ug9*@ Q,' 00:11:45.566 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '"|="7rf9bNT5^*oN\e(Jk(|b7.oyW\xQ/Ug9*@ Q,' nqn.2016-06.io.spdk:cnode11061 00:11:45.823 [2024-07-24 23:49:16.312486] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11061: invalid model number '"|="7rf9bNT5^*oN\e(Jk(|b7.oyW\xQ/Ug9*@ Q,' 00:11:45.823 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:45.823 { 00:11:45.823 "nqn": "nqn.2016-06.io.spdk:cnode11061", 00:11:45.823 "model_number": "\"|=\"7rf9bNT5^*oN\\e(Jk(|b7.oyW\\xQ/Ug9*@ Q,", 00:11:45.823 "method": "nvmf_create_subsystem", 00:11:45.823 "req_id": 1 00:11:45.823 } 00:11:45.823 Got JSON-RPC error response 00:11:45.823 response: 00:11:45.823 { 00:11:45.823 "code": -32602, 00:11:45.823 "message": "Invalid MN \"|=\"7rf9bNT5^*oN\\e(Jk(|b7.oyW\\xQ/Ug9*@ Q," 00:11:45.823 }' 00:11:45.823 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:45.823 { 00:11:45.823 "nqn": "nqn.2016-06.io.spdk:cnode11061", 00:11:45.823 "model_number": "\"|=\"7rf9bNT5^*oN\\e(Jk(|b7.oyW\\xQ/Ug9*@ Q,", 00:11:45.823 "method": "nvmf_create_subsystem", 00:11:45.823 "req_id": 1 00:11:45.823 } 00:11:45.823 Got JSON-RPC error response 00:11:45.824 response: 00:11:45.824 { 00:11:45.824 "code": -32602, 00:11:45.824 "message": "Invalid MN \"|=\"7rf9bNT5^*oN\\e(Jk(|b7.oyW\\xQ/Ug9*@ Q," 00:11:45.824 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:45.824 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:46.081 [2024-07-24 23:49:16.557396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:46.081 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:46.338 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:46.338 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:46.338 23:49:16 
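
With the 41-character fuzz model number rejected ("Invalid MN", code -32602), the script switches to the happy path: nvmf_create_transport brings up TCP (the *** TCP Transport Init *** notice) and a subsystem is created with a well-formed serial. The equivalent direct calls (relative paths are an assumption):

    ./scripts/rpc.py nvmf_create_transport --trtype tcp
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a
    # -a allows any host to connect; both calls mirror invalid.sh@62-63.
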
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:46.338 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:46.338 23:49:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:46.595 [2024-07-24 23:49:17.063068] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:46.595 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:46.595 { 00:11:46.595 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:46.595 "listen_address": { 00:11:46.595 "trtype": "tcp", 00:11:46.595 "traddr": "", 00:11:46.595 "trsvcid": "4421" 00:11:46.595 }, 00:11:46.595 "method": "nvmf_subsystem_remove_listener", 00:11:46.595 "req_id": 1 00:11:46.595 } 00:11:46.595 Got JSON-RPC error response 00:11:46.595 response: 00:11:46.595 { 00:11:46.595 "code": -32602, 00:11:46.595 "message": "Invalid parameters" 00:11:46.595 }' 00:11:46.595 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:46.595 { 00:11:46.595 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:46.595 "listen_address": { 00:11:46.595 "trtype": "tcp", 00:11:46.595 "traddr": "", 00:11:46.595 "trsvcid": "4421" 00:11:46.595 }, 00:11:46.595 "method": "nvmf_subsystem_remove_listener", 00:11:46.595 "req_id": 1 00:11:46.595 } 00:11:46.595 Got JSON-RPC error response 00:11:46.595 response: 00:11:46.595 { 00:11:46.595 "code": -32602, 00:11:46.595 "message": "Invalid parameters" 00:11:46.595 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:46.595 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4754 -i 0 00:11:46.853 [2024-07-24 23:49:17.307856] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4754: invalid cntlid range [0-65519] 00:11:46.853 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:46.853 { 00:11:46.853 "nqn": "nqn.2016-06.io.spdk:cnode4754", 00:11:46.853 "min_cntlid": 0, 00:11:46.853 "method": "nvmf_create_subsystem", 00:11:46.853 "req_id": 1 00:11:46.853 } 00:11:46.853 Got JSON-RPC error response 00:11:46.853 response: 00:11:46.853 { 00:11:46.853 "code": -32602, 00:11:46.853 "message": "Invalid cntlid range [0-65519]" 00:11:46.853 }' 00:11:46.853 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:46.853 { 00:11:46.853 "nqn": "nqn.2016-06.io.spdk:cnode4754", 00:11:46.853 "min_cntlid": 0, 00:11:46.853 "method": "nvmf_create_subsystem", 00:11:46.853 "req_id": 1 00:11:46.853 } 00:11:46.853 Got JSON-RPC error response 00:11:46.853 response: 00:11:46.853 { 00:11:46.853 "code": -32602, 00:11:46.853 "message": "Invalid cntlid range [0-65519]" 00:11:46.853 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:46.853 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21750 -i 65520 00:11:47.110 [2024-07-24 23:49:17.552665] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21750: invalid cntlid range [65520-65519] 00:11:47.110 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
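
The remove-listener probe is built to fail: IP ends up empty (echo '' piped through head -n 1), so traddr is "" and the target answers -32602 "Invalid parameters". The assertion is negated (!=) because the other message, "Unable to stop listener.", would indicate a listener-teardown regression rather than plain parameter validation. Reconstructed as a standalone snippet (paths assumed):

    IP=$(echo '' | head -n 1)                       # deliberately empty address
    out=$(./scripts/rpc.py nvmf_subsystem_remove_listener \
          nqn.2016-06.io.spdk:cnode -t tcp -a "$IP" -s 4421 2>&1) || true
    [[ $out != *"Unable to stop listener."* ]] && echo PASS
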
target/invalid.sh@75 -- # out='request: 00:11:47.110 { 00:11:47.110 "nqn": "nqn.2016-06.io.spdk:cnode21750", 00:11:47.110 "min_cntlid": 65520, 00:11:47.110 "method": "nvmf_create_subsystem", 00:11:47.110 "req_id": 1 00:11:47.110 } 00:11:47.110 Got JSON-RPC error response 00:11:47.110 response: 00:11:47.110 { 00:11:47.110 "code": -32602, 00:11:47.110 "message": "Invalid cntlid range [65520-65519]" 00:11:47.111 }' 00:11:47.111 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:47.111 { 00:11:47.111 "nqn": "nqn.2016-06.io.spdk:cnode21750", 00:11:47.111 "min_cntlid": 65520, 00:11:47.111 "method": "nvmf_create_subsystem", 00:11:47.111 "req_id": 1 00:11:47.111 } 00:11:47.111 Got JSON-RPC error response 00:11:47.111 response: 00:11:47.111 { 00:11:47.111 "code": -32602, 00:11:47.111 "message": "Invalid cntlid range [65520-65519]" 00:11:47.111 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:47.111 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32188 -I 0 00:11:47.367 [2024-07-24 23:49:17.813547] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32188: invalid cntlid range [1-0] 00:11:47.367 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:47.367 { 00:11:47.367 "nqn": "nqn.2016-06.io.spdk:cnode32188", 00:11:47.367 "max_cntlid": 0, 00:11:47.367 "method": "nvmf_create_subsystem", 00:11:47.367 "req_id": 1 00:11:47.367 } 00:11:47.367 Got JSON-RPC error response 00:11:47.367 response: 00:11:47.367 { 00:11:47.367 "code": -32602, 00:11:47.367 "message": "Invalid cntlid range [1-0]" 00:11:47.367 }' 00:11:47.367 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:47.367 { 00:11:47.367 "nqn": "nqn.2016-06.io.spdk:cnode32188", 00:11:47.367 "max_cntlid": 0, 00:11:47.367 "method": "nvmf_create_subsystem", 00:11:47.367 "req_id": 1 00:11:47.367 } 00:11:47.367 Got JSON-RPC error response 00:11:47.367 response: 00:11:47.367 { 00:11:47.367 "code": -32602, 00:11:47.367 "message": "Invalid cntlid range [1-0]" 00:11:47.367 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:47.367 23:49:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20728 -I 65520 00:11:47.625 [2024-07-24 23:49:18.058323] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20728: invalid cntlid range [1-65520] 00:11:47.625 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:47.625 { 00:11:47.625 "nqn": "nqn.2016-06.io.spdk:cnode20728", 00:11:47.625 "max_cntlid": 65520, 00:11:47.625 "method": "nvmf_create_subsystem", 00:11:47.625 "req_id": 1 00:11:47.625 } 00:11:47.625 Got JSON-RPC error response 00:11:47.625 response: 00:11:47.625 { 00:11:47.625 "code": -32602, 00:11:47.625 "message": "Invalid cntlid range [1-65520]" 00:11:47.625 }' 00:11:47.625 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:47.625 { 00:11:47.625 "nqn": "nqn.2016-06.io.spdk:cnode20728", 00:11:47.625 "max_cntlid": 65520, 00:11:47.625 "method": "nvmf_create_subsystem", 00:11:47.625 "req_id": 1 00:11:47.625 } 00:11:47.625 Got JSON-RPC error response 00:11:47.625 response: 00:11:47.625 { 00:11:47.625 
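
Note the shape of the reported ranges: the error prints the effective pair after defaults are applied, so -i 0 reports [0-65519] (default max 65519) while -I 0 reports the degenerate [1-0] (default min 1). One of the probes, runnable on its own (path assumed):

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32188 -I 0 2>&1 |
        grep 'Invalid cntlid range'                 # prints: ... [1-0]
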
"code": -32602, 00:11:47.625 "message": "Invalid cntlid range [1-65520]" 00:11:47.625 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:47.625 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode19951 -i 6 -I 5 00:11:47.883 [2024-07-24 23:49:18.303131] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19951: invalid cntlid range [6-5] 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:47.883 { 00:11:47.883 "nqn": "nqn.2016-06.io.spdk:cnode19951", 00:11:47.883 "min_cntlid": 6, 00:11:47.883 "max_cntlid": 5, 00:11:47.883 "method": "nvmf_create_subsystem", 00:11:47.883 "req_id": 1 00:11:47.883 } 00:11:47.883 Got JSON-RPC error response 00:11:47.883 response: 00:11:47.883 { 00:11:47.883 "code": -32602, 00:11:47.883 "message": "Invalid cntlid range [6-5]" 00:11:47.883 }' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:47.883 { 00:11:47.883 "nqn": "nqn.2016-06.io.spdk:cnode19951", 00:11:47.883 "min_cntlid": 6, 00:11:47.883 "max_cntlid": 5, 00:11:47.883 "method": "nvmf_create_subsystem", 00:11:47.883 "req_id": 1 00:11:47.883 } 00:11:47.883 Got JSON-RPC error response 00:11:47.883 response: 00:11:47.883 { 00:11:47.883 "code": -32602, 00:11:47.883 "message": "Invalid cntlid range [6-5]" 00:11:47.883 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:47.883 { 00:11:47.883 "name": "foobar", 00:11:47.883 "method": "nvmf_delete_target", 00:11:47.883 "req_id": 1 00:11:47.883 } 00:11:47.883 Got JSON-RPC error response 00:11:47.883 response: 00:11:47.883 { 00:11:47.883 "code": -32602, 00:11:47.883 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:47.883 }' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:47.883 { 00:11:47.883 "name": "foobar", 00:11:47.883 "method": "nvmf_delete_target", 00:11:47.883 "req_id": 1 00:11:47.883 } 00:11:47.883 Got JSON-RPC error response 00:11:47.883 response: 00:11:47.883 { 00:11:47.883 "code": -32602, 00:11:47.883 "message": "The specified target doesn't exist, cannot delete it." 
00:11:47.883 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.883 rmmod nvme_tcp 00:11:47.883 rmmod nvme_fabrics 00:11:47.883 rmmod nvme_keyring 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 3340654 ']' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 3340654 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 3340654 ']' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 3340654 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.883 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3340654 00:11:48.141 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:48.141 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:48.141 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3340654' 00:11:48.141 killing process with pid 3340654 00:11:48.141 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 3340654 00:11:48.141 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 3340654 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.399 
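
Teardown follows the standard autotest shape: clear the trap, unload nvme-tcp/nvme-fabrics/nvme-keyring, then reap the target with killprocess, which refuses to signal any pid whose comm is sudo before doing kill-plus-wait. A reconstruction of the helper as traced (the real one is in autotest_common.sh; the early-return branches are assumptions):

    killprocess() {
        local pid=$1 process_name
        [[ -z $pid ]] && return 1
        kill -0 "$pid" || return 0                  # already gone (assumed branch)
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        [[ $process_name == sudo ]] && return 1     # never kill the sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" || true          # wait works: target is a child
    }
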
23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:48.399 23:49:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.355 00:11:50.355 real 0m8.599s 00:11:50.355 user 0m19.826s 00:11:50.355 sys 0m2.433s 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:50.355 ************************************ 00:11:50.355 END TEST nvmf_invalid 00:11:50.355 ************************************ 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.355 ************************************ 00:11:50.355 START TEST nvmf_connect_stress 00:11:50.355 ************************************ 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:50.355 * Looking for test storage... 00:11:50.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.355 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # 
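
The starred banners and the real/user/sys block come from run_test, which fences and times each test script; nvmf_invalid finished in 8.6 s and connect_stress.sh now starts under the same wrapper. Its shape, inferred from the banners alone (the real helper in autotest_common.sh also tracks xtrace state):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # e.g. connect_stress.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
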
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.356 23:49:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # 
local -a pci_devs 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.882 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.882 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.882 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.882 23:49:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.882 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.883 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_0 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.883 23:49:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:11:52.883 00:11:52.883 --- 10.0.0.2 ping statistics --- 00:11:52.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.883 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:11:52.883 00:11:52.883 --- 10.0.0.1 ping statistics --- 00:11:52.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.883 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=3343173 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 3343173 00:11:52.883 23:49:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 3343173 ']' 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.883 23:49:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.883 [2024-07-24 23:49:23.136667] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:11:52.883 [2024-07-24 23:49:23.136751] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.883 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.883 [2024-07-24 23:49:23.203870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.883 [2024-07-24 23:49:23.324011] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.883 [2024-07-24 23:49:23.324072] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.883 [2024-07-24 23:49:23.324088] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.883 [2024-07-24 23:49:23.324102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.883 [2024-07-24 23:49:23.324114] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
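
The bring-up traced above is SPDK's phy-mode TCP fixture: nvmf_tcp_init moves one E810 port (cvl_0_0) into a private network namespace as the target side, keeps its peer port (cvl_0_1) in the root namespace as the initiator, addresses them 10.0.0.2/10.0.0.1, opens TCP port 4420, verifies reachability with ping in both directions, and then nvmfappstart launches nvmf_tgt inside the namespace. A condensed, hand-written sketch of those steps — device names, addresses, and flags are taken from the trace; the socket wait loop is only an approximation of the harness's waitforlisten helper:

# Condensed sketch of the phy-mode TCP fixture set up above (not the
# verbatim common.sh code). cvl_0_0/cvl_0_1 and 10.0.0.x come from the log.
NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                               # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1           # target -> initiator

# nvmfappstart: run the target inside the namespace; -m 0xE pins reactors to
# cores 1-3 (three reactors, as the notices above show), -e 0xFFFF enables all
# tracepoint groups, -i 0 sets the shared-memory id.
ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
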
00:11:52.883 [2024-07-24 23:49:23.324199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.883 [2024-07-24 23:49:23.324269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.883 [2024-07-24 23:49:23.324263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.814 [2024-07-24 23:49:24.136322] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.814 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.815 [2024-07-24 23:49:24.171264] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.815 NULL1 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@21 -- # PERF_PID=3343325 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.815 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.073 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.073 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:54.073 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.073 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.073 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.331 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.331 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:54.331 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.331 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.331 23:49:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.588 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.588 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:54.588 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.588 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.588 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.151 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.151 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:55.151 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.151 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.151 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.408 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.408 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:55.408 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.408 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.408 23:49:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.665 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.665 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:55.665 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.665 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.665 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.922 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.922 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:55.922 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.922 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.922 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.487 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.487 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:56.487 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.487 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.487 23:49:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.743 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.744 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:56.744 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.744 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.744 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.000 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.000 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:57.000 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.000 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.000 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.257 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.257 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:57.257 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.257 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.257 23:49:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.514 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.514 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:57.514 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.514 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.514 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.079 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.079 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:58.079 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.079 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.079 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.337 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.337 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:58.337 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.337 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.337 23:49:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.594 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.594 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:58.594 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.594 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.594 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.851 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.851 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:58.851 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.851 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.851 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.108 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.108 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:59.108 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.108 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.108 23:49:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.672 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.672 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:59.672 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.672 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.672 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.929 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.929 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:11:59.929 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.929 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.929 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.186 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.186 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:00.186 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.186 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.186 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.442 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.442 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:00.442 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.442 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.442 23:49:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.699 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.699 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:00.699 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.699 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.699 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.264 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.264 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:01.264 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.264 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.264 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.521 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.521 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:01.521 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.521 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.521 23:49:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.778 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.778 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:01.778 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.778 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.778 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.035 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.035 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:02.035 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.035 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.035 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.598 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.598 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:02.598 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.598 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.599 23:49:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.855 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.855 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:02.855 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.855 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.855 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.112 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.112 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:03.112 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.112 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.112 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.368 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.368 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:03.368 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.368 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.369 23:49:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.625 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.625 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:03.625 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.625 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.625 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.881 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3343325 00:12:04.138 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: 
kill: (3343325) - No such process 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3343325 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.138 rmmod nvme_tcp 00:12:04.138 rmmod nvme_fabrics 00:12:04.138 rmmod nvme_keyring 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 3343173 ']' 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 3343173 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 3343173 ']' 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 3343173 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3343173 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3343173' 00:12:04.138 killing process with pid 3343173 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 3343173 00:12:04.138 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 3343173 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.397 23:49:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.335 23:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.335 00:12:06.335 real 0m16.055s 00:12:06.335 user 0m40.571s 00:12:06.335 sys 0m6.010s 00:12:06.335 23:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.335 23:49:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.335 ************************************ 00:12:06.335 END TEST nvmf_connect_stress 00:12:06.335 ************************************ 00:12:06.594 23:49:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:06.594 23:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.594 23:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.594 23:49:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:06.594 ************************************ 00:12:06.594 START TEST nvmf_fused_ordering 00:12:06.594 ************************************ 00:12:06.594 23:49:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:06.594 * Looking for test storage... 
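
The connect_stress body that just completed (before nvmf_fused_ordering starts below) follows a simple pattern: configure the target over RPC (a TCP transport, subsystem cnode1 with serial SPDK00000000000001 and up to 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MB null bdev with 512-byte blocks), start the connect_stress stressor for roughly ten seconds of connect churn (-t 10), and keep replaying a canned RPC batch for as long as the stressor is alive — the "kill: (3343325) - No such process" message above is the loop's expected exit condition, not a failure. A condensed sketch of that flow; the per-iteration contents of rpc.txt are not visible in the trace, so $batch below is a placeholder, and rpc_cmd is the harness's wrapper around scripts/rpc.py:

# Condensed sketch of the connect_stress flow traced above; values come from
# the log, $batch is a placeholder for the elided per-pass RPCs.
rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512

test/nvme/connect_stress/connect_stress -c 0x1 \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
  -t 10 &
PERF_PID=$!

rpcs=rpc.txt
rm -f "$rpcs"
for i in $(seq 1 20); do
  cat "$batch" >> "$rpcs"     # placeholder: one batch of RPCs appended per pass
done

# Replay the batch until the stressor exits; the unguarded kill -0 is what
# prints "kill: (PID) - No such process" on the final, failing check.
while kill -0 "$PERF_PID"; do
  rpc_cmd < "$rpcs"
done
wait "$PERF_PID"
rm -f "$rpcs"

# Teardown, as traced: unload the initiator-side modules, stop the target.
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
kill "$nvmfpid"               # killprocess() in the harness
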
00:12:06.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain entries...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:06.594 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.595 23:49:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:08.495 23:49:39 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:08.495 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:08.495 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
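The probe loop above matches each supported NIC by PCI vendor/device ID and then resolves its kernel net devices through the /sys/bus/pci/devices/<bdf>/net/ glob, exactly as the pci_net_devs assignment shows. A standalone sketch of the same idea for the E810 parts (0x8086:0x159b) detected in this run:

#!/usr/bin/env bash
# map each Intel E810 PCI function to the net devices the kernel created for it
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] || continue   # skip functions with no bound net driver
        echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done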
00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:08.495 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:08.496 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:08.496 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:08.496 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:08.754 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:08.754 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:12:08.754 00:12:08.754 --- 10.0.0.2 ping statistics --- 00:12:08.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.754 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:08.754 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:08.754 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:12:08.754 00:12:08.754 --- 10.0.0.1 ping statistics --- 00:12:08.754 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:08.754 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=3346468 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 3346468 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 3346468 ']' 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:08.754 23:49:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:08.754 [2024-07-24 23:49:39.300837] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
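The nvmf_tcp_init sequence above moves the first E810 port into a private network namespace for the target, leaves the second port in the root namespace for the initiator, opens TCP/4420, and ping-checks both directions before launching nvmf_tgt inside the namespace. Condensed from the commands visible in this log, with the cvl_0_0/cvl_0_1 names and 10.0.0.0/24 addressing used here:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                   # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> initiator
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &

The DPDK EAL parameter dump for that nvmf app continues directly below.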
00:12:08.754 [2024-07-24 23:49:39.300921] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.754 EAL: No free 2048 kB hugepages reported on node 1 00:12:09.012 [2024-07-24 23:49:39.370704] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:09.012 [2024-07-24 23:49:39.490775] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.012 [2024-07-24 23:49:39.490840] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:09.012 [2024-07-24 23:49:39.490857] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.012 [2024-07-24 23:49:39.490870] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.012 [2024-07-24 23:49:39.490881] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.012 [2024-07-24 23:49:39.490921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.945 [2024-07-24 23:49:40.274788] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.945 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.946 [2024-07-24 23:49:40.290952] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.946 NULL1 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.946 23:49:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:09.946 [2024-07-24 23:49:40.336452] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:12:09.946 [2024-07-24 23:49:40.336496] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3346621 ] 00:12:09.946 EAL: No free 2048 kB hugepages reported on node 1 00:12:10.203 Attached to nqn.2016-06.io.spdk:cnode1 00:12:10.203 Namespace ID: 1 size: 1GB 00:12:10.203 fused_ordering(0) 00:12:10.203 fused_ordering(1) 00:12:10.203 fused_ordering(2) 00:12:10.203 fused_ordering(3) 00:12:10.203 fused_ordering(4) 00:12:10.203 fused_ordering(5) 00:12:10.203 fused_ordering(6) 00:12:10.203 fused_ordering(7) 00:12:10.203 fused_ordering(8) 00:12:10.204 fused_ordering(9) 00:12:10.204 fused_ordering(10) 00:12:10.204 fused_ordering(11) 00:12:10.204 fused_ordering(12) 00:12:10.204 fused_ordering(13) 00:12:10.204 fused_ordering(14) 00:12:10.204 fused_ordering(15) 00:12:10.204 fused_ordering(16) 00:12:10.204 fused_ordering(17) 00:12:10.204 fused_ordering(18) 00:12:10.204 fused_ordering(19) 00:12:10.204 fused_ordering(20) 00:12:10.204 fused_ordering(21) 00:12:10.204 fused_ordering(22) 00:12:10.204 fused_ordering(23) 00:12:10.204 fused_ordering(24) 00:12:10.204 fused_ordering(25) 00:12:10.204 fused_ordering(26) 00:12:10.204 fused_ordering(27) 00:12:10.204 fused_ordering(28) 00:12:10.204 fused_ordering(29) 00:12:10.204 fused_ordering(30) 00:12:10.204 fused_ordering(31) 00:12:10.204 fused_ordering(32) 00:12:10.204 fused_ordering(33) 00:12:10.204 fused_ordering(34) 00:12:10.204 fused_ordering(35) 00:12:10.204 fused_ordering(36) 00:12:10.204 fused_ordering(37) 00:12:10.204 fused_ordering(38) 00:12:10.204 fused_ordering(39) 00:12:10.204 fused_ordering(40) 00:12:10.204 fused_ordering(41) 00:12:10.204 fused_ordering(42) 00:12:10.204 fused_ordering(43) 00:12:10.204 fused_ordering(44) 00:12:10.204 fused_ordering(45) 00:12:10.204 fused_ordering(46) 00:12:10.204 fused_ordering(47) 00:12:10.204 fused_ordering(48) 00:12:10.204 fused_ordering(49) 00:12:10.204 fused_ordering(50) 00:12:10.204 fused_ordering(51) 00:12:10.204 fused_ordering(52) 00:12:10.204 fused_ordering(53) 00:12:10.204 fused_ordering(54) 00:12:10.204 fused_ordering(55) 00:12:10.204 fused_ordering(56) 00:12:10.204 fused_ordering(57) 00:12:10.204 fused_ordering(58) 00:12:10.204 fused_ordering(59) 00:12:10.204 fused_ordering(60) 00:12:10.204 fused_ordering(61) 00:12:10.204 fused_ordering(62) 00:12:10.204 fused_ordering(63) 00:12:10.204 fused_ordering(64) 00:12:10.204 fused_ordering(65) 00:12:10.204 fused_ordering(66) 00:12:10.204 fused_ordering(67) 00:12:10.204 fused_ordering(68) 00:12:10.204 fused_ordering(69) 00:12:10.204 fused_ordering(70) 00:12:10.204 fused_ordering(71) 00:12:10.204 fused_ordering(72) 00:12:10.204 fused_ordering(73) 00:12:10.204 fused_ordering(74) 00:12:10.204 fused_ordering(75) 00:12:10.204 fused_ordering(76) 00:12:10.204 fused_ordering(77) 00:12:10.204 fused_ordering(78) 00:12:10.204 fused_ordering(79) 00:12:10.204 fused_ordering(80) 00:12:10.204 fused_ordering(81) 00:12:10.204 fused_ordering(82) 00:12:10.204 fused_ordering(83) 00:12:10.204 fused_ordering(84) 00:12:10.204 fused_ordering(85) 00:12:10.204 fused_ordering(86) 00:12:10.204 fused_ordering(87) 00:12:10.204 fused_ordering(88) 00:12:10.204 fused_ordering(89) 00:12:10.204 fused_ordering(90) 00:12:10.204 fused_ordering(91) 00:12:10.204 fused_ordering(92) 00:12:10.204 fused_ordering(93) 00:12:10.204 fused_ordering(94) 00:12:10.204 fused_ordering(95) 00:12:10.204 fused_ordering(96) 
00:12:10.204 fused_ordering(97) 00:12:10.204 fused_ordering(98) 00:12:10.204 fused_ordering(99) 00:12:10.204 fused_ordering(100) 00:12:10.204 fused_ordering(101) 00:12:10.204 fused_ordering(102) 00:12:10.204 fused_ordering(103) 00:12:10.204 fused_ordering(104) 00:12:10.204 fused_ordering(105) 00:12:10.204 fused_ordering(106) 00:12:10.204 fused_ordering(107) 00:12:10.204 fused_ordering(108) 00:12:10.204 fused_ordering(109) 00:12:10.204 fused_ordering(110) 00:12:10.204 fused_ordering(111) 00:12:10.204 fused_ordering(112) 00:12:10.204 fused_ordering(113) 00:12:10.204 fused_ordering(114) 00:12:10.204 fused_ordering(115) 00:12:10.204 fused_ordering(116) 00:12:10.204 fused_ordering(117) 00:12:10.204 fused_ordering(118) 00:12:10.204 fused_ordering(119) 00:12:10.204 fused_ordering(120) 00:12:10.204 fused_ordering(121) 00:12:10.204 fused_ordering(122) 00:12:10.204 fused_ordering(123) 00:12:10.204 fused_ordering(124) 00:12:10.204 fused_ordering(125) 00:12:10.204 fused_ordering(126) 00:12:10.204 fused_ordering(127) 00:12:10.204 fused_ordering(128) 00:12:10.204 fused_ordering(129) 00:12:10.204 fused_ordering(130) 00:12:10.204 fused_ordering(131) 00:12:10.204 fused_ordering(132) 00:12:10.204 fused_ordering(133) 00:12:10.204 fused_ordering(134) 00:12:10.204 fused_ordering(135) 00:12:10.204 fused_ordering(136) 00:12:10.204 fused_ordering(137) 00:12:10.204 fused_ordering(138) 00:12:10.204 fused_ordering(139) 00:12:10.204 fused_ordering(140) 00:12:10.204 fused_ordering(141) 00:12:10.204 fused_ordering(142) 00:12:10.204 fused_ordering(143) 00:12:10.204 fused_ordering(144) 00:12:10.204 fused_ordering(145) 00:12:10.204 fused_ordering(146) 00:12:10.204 fused_ordering(147) 00:12:10.204 fused_ordering(148) 00:12:10.204 fused_ordering(149) 00:12:10.204 fused_ordering(150) 00:12:10.204 fused_ordering(151) 00:12:10.204 fused_ordering(152) 00:12:10.204 fused_ordering(153) 00:12:10.204 fused_ordering(154) 00:12:10.204 fused_ordering(155) 00:12:10.204 fused_ordering(156) 00:12:10.205 fused_ordering(157) 00:12:10.205 fused_ordering(158) 00:12:10.205 fused_ordering(159) 00:12:10.205 fused_ordering(160) 00:12:10.205 fused_ordering(161) 00:12:10.205 fused_ordering(162) 00:12:10.205 fused_ordering(163) 00:12:10.205 fused_ordering(164) 00:12:10.205 fused_ordering(165) 00:12:10.205 fused_ordering(166) 00:12:10.205 fused_ordering(167) 00:12:10.205 fused_ordering(168) 00:12:10.205 fused_ordering(169) 00:12:10.205 fused_ordering(170) 00:12:10.205 fused_ordering(171) 00:12:10.205 fused_ordering(172) 00:12:10.205 fused_ordering(173) 00:12:10.205 fused_ordering(174) 00:12:10.205 fused_ordering(175) 00:12:10.205 fused_ordering(176) 00:12:10.205 fused_ordering(177) 00:12:10.205 fused_ordering(178) 00:12:10.205 fused_ordering(179) 00:12:10.205 fused_ordering(180) 00:12:10.205 fused_ordering(181) 00:12:10.205 fused_ordering(182) 00:12:10.205 fused_ordering(183) 00:12:10.205 fused_ordering(184) 00:12:10.205 fused_ordering(185) 00:12:10.205 fused_ordering(186) 00:12:10.205 fused_ordering(187) 00:12:10.205 fused_ordering(188) 00:12:10.205 fused_ordering(189) 00:12:10.205 fused_ordering(190) 00:12:10.205 fused_ordering(191) 00:12:10.205 fused_ordering(192) 00:12:10.205 fused_ordering(193) 00:12:10.205 fused_ordering(194) 00:12:10.205 fused_ordering(195) 00:12:10.205 fused_ordering(196) 00:12:10.205 fused_ordering(197) 00:12:10.205 fused_ordering(198) 00:12:10.205 fused_ordering(199) 00:12:10.205 fused_ordering(200) 00:12:10.205 fused_ordering(201) 00:12:10.205 fused_ordering(202) 00:12:10.205 fused_ordering(203) 00:12:10.205 
fused_ordering(204) 00:12:10.205 fused_ordering(205) 00:12:10.769 fused_ordering(206) 00:12:10.769 fused_ordering(207) 00:12:10.769 fused_ordering(208) 00:12:10.769 fused_ordering(209) 00:12:10.769 fused_ordering(210) 00:12:10.769 fused_ordering(211) 00:12:10.769 fused_ordering(212) 00:12:10.769 fused_ordering(213) 00:12:10.769 fused_ordering(214) 00:12:10.769 fused_ordering(215) 00:12:10.769 fused_ordering(216) 00:12:10.769 fused_ordering(217) 00:12:10.769 fused_ordering(218) 00:12:10.769 fused_ordering(219) 00:12:10.769 fused_ordering(220) 00:12:10.769 fused_ordering(221) 00:12:10.769 fused_ordering(222) 00:12:10.769 fused_ordering(223) 00:12:10.769 fused_ordering(224) 00:12:10.769 fused_ordering(225) 00:12:10.769 fused_ordering(226) 00:12:10.769 fused_ordering(227) 00:12:10.769 fused_ordering(228) 00:12:10.769 fused_ordering(229) 00:12:10.769 fused_ordering(230) 00:12:10.769 fused_ordering(231) 00:12:10.769 fused_ordering(232) 00:12:10.769 fused_ordering(233) 00:12:10.769 fused_ordering(234) 00:12:10.769 fused_ordering(235) 00:12:10.769 fused_ordering(236) 00:12:10.769 fused_ordering(237) 00:12:10.769 fused_ordering(238) 00:12:10.769 fused_ordering(239) 00:12:10.769 fused_ordering(240) 00:12:10.769 fused_ordering(241) 00:12:10.769 fused_ordering(242) 00:12:10.769 fused_ordering(243) 00:12:10.769 fused_ordering(244) 00:12:10.769 fused_ordering(245) 00:12:10.769 fused_ordering(246) 00:12:10.769 fused_ordering(247) 00:12:10.769 fused_ordering(248) 00:12:10.769 fused_ordering(249) 00:12:10.769 fused_ordering(250) 00:12:10.769 fused_ordering(251) 00:12:10.769 fused_ordering(252) 00:12:10.769 fused_ordering(253) 00:12:10.769 fused_ordering(254) 00:12:10.769 fused_ordering(255) 00:12:10.769 fused_ordering(256) 00:12:10.769 fused_ordering(257) 00:12:10.769 fused_ordering(258) 00:12:10.769 fused_ordering(259) 00:12:10.769 fused_ordering(260) 00:12:10.769 fused_ordering(261) 00:12:10.769 fused_ordering(262) 00:12:10.769 fused_ordering(263) 00:12:10.769 fused_ordering(264) 00:12:10.769 fused_ordering(265) 00:12:10.769 fused_ordering(266) 00:12:10.769 fused_ordering(267) 00:12:10.769 fused_ordering(268) 00:12:10.769 fused_ordering(269) 00:12:10.769 fused_ordering(270) 00:12:10.769 fused_ordering(271) 00:12:10.769 fused_ordering(272) 00:12:10.769 fused_ordering(273) 00:12:10.769 fused_ordering(274) 00:12:10.769 fused_ordering(275) 00:12:10.769 fused_ordering(276) 00:12:10.769 fused_ordering(277) 00:12:10.769 fused_ordering(278) 00:12:10.769 fused_ordering(279) 00:12:10.769 fused_ordering(280) 00:12:10.769 fused_ordering(281) 00:12:10.769 fused_ordering(282) 00:12:10.769 fused_ordering(283) 00:12:10.769 fused_ordering(284) 00:12:10.769 fused_ordering(285) 00:12:10.769 fused_ordering(286) 00:12:10.769 fused_ordering(287) 00:12:10.769 fused_ordering(288) 00:12:10.769 fused_ordering(289) 00:12:10.769 fused_ordering(290) 00:12:10.769 fused_ordering(291) 00:12:10.769 fused_ordering(292) 00:12:10.769 fused_ordering(293) 00:12:10.769 fused_ordering(294) 00:12:10.769 fused_ordering(295) 00:12:10.769 fused_ordering(296) 00:12:10.769 fused_ordering(297) 00:12:10.769 fused_ordering(298) 00:12:10.769 fused_ordering(299) 00:12:10.769 fused_ordering(300) 00:12:10.769 fused_ordering(301) 00:12:10.769 fused_ordering(302) 00:12:10.769 fused_ordering(303) 00:12:10.770 fused_ordering(304) 00:12:10.770 fused_ordering(305) 00:12:10.770 fused_ordering(306) 00:12:10.770 fused_ordering(307) 00:12:10.770 fused_ordering(308) 00:12:10.770 fused_ordering(309) 00:12:10.770 fused_ordering(310) 00:12:10.770 fused_ordering(311) 
00:12:10.770 fused_ordering(312) 00:12:10.770 fused_ordering(313) 00:12:10.770 fused_ordering(314) 00:12:10.770 fused_ordering(315) 00:12:10.770 fused_ordering(316) 00:12:10.770 fused_ordering(317) 00:12:10.770 fused_ordering(318) 00:12:10.770 fused_ordering(319) 00:12:10.770 fused_ordering(320) 00:12:10.770 fused_ordering(321) 00:12:10.770 fused_ordering(322) 00:12:10.770 fused_ordering(323) 00:12:10.770 fused_ordering(324) 00:12:10.770 fused_ordering(325) 00:12:10.770 fused_ordering(326) 00:12:10.770 fused_ordering(327) 00:12:10.770 fused_ordering(328) 00:12:10.770 fused_ordering(329) 00:12:10.770 fused_ordering(330) 00:12:10.770 fused_ordering(331) 00:12:10.770 fused_ordering(332) 00:12:10.770 fused_ordering(333) 00:12:10.770 fused_ordering(334) 00:12:10.770 fused_ordering(335) 00:12:10.770 fused_ordering(336) 00:12:10.770 fused_ordering(337) 00:12:10.770 fused_ordering(338) 00:12:10.770 fused_ordering(339) 00:12:10.770 fused_ordering(340) 00:12:10.770 fused_ordering(341) 00:12:10.770 fused_ordering(342) 00:12:10.770 fused_ordering(343) 00:12:10.770 fused_ordering(344) 00:12:10.770 fused_ordering(345) 00:12:10.770 fused_ordering(346) 00:12:10.770 fused_ordering(347) 00:12:10.770 fused_ordering(348) 00:12:10.770 fused_ordering(349) 00:12:10.770 fused_ordering(350) 00:12:10.770 fused_ordering(351) 00:12:10.770 fused_ordering(352) 00:12:10.770 fused_ordering(353) 00:12:10.770 fused_ordering(354) 00:12:10.770 fused_ordering(355) 00:12:10.770 fused_ordering(356) 00:12:10.770 fused_ordering(357) 00:12:10.770 fused_ordering(358) 00:12:10.770 fused_ordering(359) 00:12:10.770 fused_ordering(360) 00:12:10.770 fused_ordering(361) 00:12:10.770 fused_ordering(362) 00:12:10.770 fused_ordering(363) 00:12:10.770 fused_ordering(364) 00:12:10.770 fused_ordering(365) 00:12:10.770 fused_ordering(366) 00:12:10.770 fused_ordering(367) 00:12:10.770 fused_ordering(368) 00:12:10.770 fused_ordering(369) 00:12:10.770 fused_ordering(370) 00:12:10.770 fused_ordering(371) 00:12:10.770 fused_ordering(372) 00:12:10.770 fused_ordering(373) 00:12:10.770 fused_ordering(374) 00:12:10.770 fused_ordering(375) 00:12:10.770 fused_ordering(376) 00:12:10.770 fused_ordering(377) 00:12:10.770 fused_ordering(378) 00:12:10.770 fused_ordering(379) 00:12:10.770 fused_ordering(380) 00:12:10.770 fused_ordering(381) 00:12:10.770 fused_ordering(382) 00:12:10.770 fused_ordering(383) 00:12:10.770 fused_ordering(384) 00:12:10.770 fused_ordering(385) 00:12:10.770 fused_ordering(386) 00:12:10.770 fused_ordering(387) 00:12:10.770 fused_ordering(388) 00:12:10.770 fused_ordering(389) 00:12:10.770 fused_ordering(390) 00:12:10.770 fused_ordering(391) 00:12:10.770 fused_ordering(392) 00:12:10.770 fused_ordering(393) 00:12:10.770 fused_ordering(394) 00:12:10.770 fused_ordering(395) 00:12:10.770 fused_ordering(396) 00:12:10.770 fused_ordering(397) 00:12:10.770 fused_ordering(398) 00:12:10.770 fused_ordering(399) 00:12:10.770 fused_ordering(400) 00:12:10.770 fused_ordering(401) 00:12:10.770 fused_ordering(402) 00:12:10.770 fused_ordering(403) 00:12:10.770 fused_ordering(404) 00:12:10.770 fused_ordering(405) 00:12:10.770 fused_ordering(406) 00:12:10.770 fused_ordering(407) 00:12:10.770 fused_ordering(408) 00:12:10.770 fused_ordering(409) 00:12:10.770 fused_ordering(410) 00:12:11.333 fused_ordering(411) 00:12:11.333 fused_ordering(412) 00:12:11.333 fused_ordering(413) 00:12:11.333 fused_ordering(414) 00:12:11.333 fused_ordering(415) 00:12:11.333 fused_ordering(416) 00:12:11.333 fused_ordering(417) 00:12:11.333 fused_ordering(418) 00:12:11.333 
fused_ordering(419) 00:12:11.333 fused_ordering(420) 00:12:11.333 fused_ordering(421) 00:12:11.333 fused_ordering(422) 00:12:11.333 fused_ordering(423) 00:12:11.333 fused_ordering(424) 00:12:11.333 fused_ordering(425) 00:12:11.333 fused_ordering(426) 00:12:11.333 fused_ordering(427) 00:12:11.333 fused_ordering(428) 00:12:11.333 fused_ordering(429) 00:12:11.333 fused_ordering(430) 00:12:11.333 fused_ordering(431) 00:12:11.333 fused_ordering(432) 00:12:11.333 fused_ordering(433) 00:12:11.333 fused_ordering(434) 00:12:11.333 fused_ordering(435) 00:12:11.333 fused_ordering(436) 00:12:11.333 fused_ordering(437) 00:12:11.333 fused_ordering(438) 00:12:11.333 fused_ordering(439) 00:12:11.333 fused_ordering(440) 00:12:11.333 fused_ordering(441) 00:12:11.333 fused_ordering(442) 00:12:11.333 fused_ordering(443) 00:12:11.333 fused_ordering(444) 00:12:11.333 fused_ordering(445) 00:12:11.333 fused_ordering(446) 00:12:11.333 fused_ordering(447) 00:12:11.333 fused_ordering(448) 00:12:11.333 fused_ordering(449) 00:12:11.333 fused_ordering(450) 00:12:11.333 fused_ordering(451) 00:12:11.333 fused_ordering(452) 00:12:11.333 fused_ordering(453) 00:12:11.334 fused_ordering(454) 00:12:11.334 fused_ordering(455) 00:12:11.334 fused_ordering(456) 00:12:11.334 fused_ordering(457) 00:12:11.334 fused_ordering(458) 00:12:11.334 fused_ordering(459) 00:12:11.334 fused_ordering(460) 00:12:11.334 fused_ordering(461) 00:12:11.334 fused_ordering(462) 00:12:11.334 fused_ordering(463) 00:12:11.334 fused_ordering(464) 00:12:11.334 fused_ordering(465) 00:12:11.334 fused_ordering(466) 00:12:11.334 fused_ordering(467) 00:12:11.334 fused_ordering(468) 00:12:11.334 fused_ordering(469) 00:12:11.334 fused_ordering(470) 00:12:11.334 fused_ordering(471) 00:12:11.334 fused_ordering(472) 00:12:11.334 fused_ordering(473) 00:12:11.334 fused_ordering(474) 00:12:11.334 fused_ordering(475) 00:12:11.334 fused_ordering(476) 00:12:11.334 fused_ordering(477) 00:12:11.334 fused_ordering(478) 00:12:11.334 fused_ordering(479) 00:12:11.334 fused_ordering(480) 00:12:11.334 fused_ordering(481) 00:12:11.334 fused_ordering(482) 00:12:11.334 fused_ordering(483) 00:12:11.334 fused_ordering(484) 00:12:11.334 fused_ordering(485) 00:12:11.334 fused_ordering(486) 00:12:11.334 fused_ordering(487) 00:12:11.334 fused_ordering(488) 00:12:11.334 fused_ordering(489) 00:12:11.334 fused_ordering(490) 00:12:11.334 fused_ordering(491) 00:12:11.334 fused_ordering(492) 00:12:11.334 fused_ordering(493) 00:12:11.334 fused_ordering(494) 00:12:11.334 fused_ordering(495) 00:12:11.334 fused_ordering(496) 00:12:11.334 fused_ordering(497) 00:12:11.334 fused_ordering(498) 00:12:11.334 fused_ordering(499) 00:12:11.334 fused_ordering(500) 00:12:11.334 fused_ordering(501) 00:12:11.334 fused_ordering(502) 00:12:11.334 fused_ordering(503) 00:12:11.334 fused_ordering(504) 00:12:11.334 fused_ordering(505) 00:12:11.334 fused_ordering(506) 00:12:11.334 fused_ordering(507) 00:12:11.334 fused_ordering(508) 00:12:11.334 fused_ordering(509) 00:12:11.334 fused_ordering(510) 00:12:11.334 fused_ordering(511) 00:12:11.334 fused_ordering(512) 00:12:11.334 fused_ordering(513) 00:12:11.334 fused_ordering(514) 00:12:11.334 fused_ordering(515) 00:12:11.334 fused_ordering(516) 00:12:11.334 fused_ordering(517) 00:12:11.334 fused_ordering(518) 00:12:11.334 fused_ordering(519) 00:12:11.334 fused_ordering(520) 00:12:11.334 fused_ordering(521) 00:12:11.334 fused_ordering(522) 00:12:11.334 fused_ordering(523) 00:12:11.334 fused_ordering(524) 00:12:11.334 fused_ordering(525) 00:12:11.334 fused_ordering(526) 
00:12:11.334 fused_ordering(527) 00:12:11.334 fused_ordering(528) 00:12:11.334 fused_ordering(529) 00:12:11.334 fused_ordering(530) 00:12:11.334 fused_ordering(531) 00:12:11.334 fused_ordering(532) 00:12:11.334 fused_ordering(533) 00:12:11.334 fused_ordering(534) 00:12:11.334 fused_ordering(535) 00:12:11.334 fused_ordering(536) 00:12:11.334 fused_ordering(537) 00:12:11.334 fused_ordering(538) 00:12:11.334 fused_ordering(539) 00:12:11.334 fused_ordering(540) 00:12:11.334 fused_ordering(541) 00:12:11.334 fused_ordering(542) 00:12:11.334 fused_ordering(543) 00:12:11.334 fused_ordering(544) 00:12:11.334 fused_ordering(545) 00:12:11.334 fused_ordering(546) 00:12:11.334 fused_ordering(547) 00:12:11.334 fused_ordering(548) 00:12:11.334 fused_ordering(549) 00:12:11.334 fused_ordering(550) 00:12:11.334 fused_ordering(551) 00:12:11.334 fused_ordering(552) 00:12:11.334 fused_ordering(553) 00:12:11.334 fused_ordering(554) 00:12:11.334 fused_ordering(555) 00:12:11.334 fused_ordering(556) 00:12:11.334 fused_ordering(557) 00:12:11.334 fused_ordering(558) 00:12:11.334 fused_ordering(559) 00:12:11.334 fused_ordering(560) 00:12:11.334 fused_ordering(561) 00:12:11.334 fused_ordering(562) 00:12:11.334 fused_ordering(563) 00:12:11.334 fused_ordering(564) 00:12:11.334 fused_ordering(565) 00:12:11.334 fused_ordering(566) 00:12:11.334 fused_ordering(567) 00:12:11.334 fused_ordering(568) 00:12:11.334 fused_ordering(569) 00:12:11.334 fused_ordering(570) 00:12:11.334 fused_ordering(571) 00:12:11.334 fused_ordering(572) 00:12:11.334 fused_ordering(573) 00:12:11.334 fused_ordering(574) 00:12:11.334 fused_ordering(575) 00:12:11.334 fused_ordering(576) 00:12:11.334 fused_ordering(577) 00:12:11.334 fused_ordering(578) 00:12:11.334 fused_ordering(579) 00:12:11.334 fused_ordering(580) 00:12:11.334 fused_ordering(581) 00:12:11.334 fused_ordering(582) 00:12:11.334 fused_ordering(583) 00:12:11.334 fused_ordering(584) 00:12:11.334 fused_ordering(585) 00:12:11.334 fused_ordering(586) 00:12:11.334 fused_ordering(587) 00:12:11.334 fused_ordering(588) 00:12:11.334 fused_ordering(589) 00:12:11.334 fused_ordering(590) 00:12:11.334 fused_ordering(591) 00:12:11.334 fused_ordering(592) 00:12:11.334 fused_ordering(593) 00:12:11.334 fused_ordering(594) 00:12:11.334 fused_ordering(595) 00:12:11.334 fused_ordering(596) 00:12:11.334 fused_ordering(597) 00:12:11.334 fused_ordering(598) 00:12:11.334 fused_ordering(599) 00:12:11.334 fused_ordering(600) 00:12:11.334 fused_ordering(601) 00:12:11.334 fused_ordering(602) 00:12:11.334 fused_ordering(603) 00:12:11.334 fused_ordering(604) 00:12:11.334 fused_ordering(605) 00:12:11.334 fused_ordering(606) 00:12:11.334 fused_ordering(607) 00:12:11.334 fused_ordering(608) 00:12:11.334 fused_ordering(609) 00:12:11.334 fused_ordering(610) 00:12:11.334 fused_ordering(611) 00:12:11.334 fused_ordering(612) 00:12:11.334 fused_ordering(613) 00:12:11.334 fused_ordering(614) 00:12:11.334 fused_ordering(615) 00:12:11.899 fused_ordering(616) 00:12:11.899 fused_ordering(617) 00:12:11.899 fused_ordering(618) 00:12:11.899 fused_ordering(619) 00:12:11.899 fused_ordering(620) 00:12:11.899 fused_ordering(621) 00:12:11.899 fused_ordering(622) 00:12:11.899 fused_ordering(623) 00:12:11.899 fused_ordering(624) 00:12:11.899 fused_ordering(625) 00:12:11.899 fused_ordering(626) 00:12:11.899 fused_ordering(627) 00:12:11.899 fused_ordering(628) 00:12:11.899 fused_ordering(629) 00:12:11.899 fused_ordering(630) 00:12:11.899 fused_ordering(631) 00:12:11.899 fused_ordering(632) 00:12:11.899 fused_ordering(633) 00:12:11.899 
fused_ordering(634) 00:12:11.899 fused_ordering(635) 00:12:11.899 fused_ordering(636) 00:12:11.899 fused_ordering(637) 00:12:11.899 fused_ordering(638) 00:12:11.899 fused_ordering(639) 00:12:11.899 fused_ordering(640) 00:12:11.899 fused_ordering(641) 00:12:11.899 fused_ordering(642) 00:12:11.899 fused_ordering(643) 00:12:11.899 fused_ordering(644) 00:12:11.899 fused_ordering(645) 00:12:11.899 fused_ordering(646) 00:12:11.899 fused_ordering(647) 00:12:11.899 fused_ordering(648) 00:12:11.899 fused_ordering(649) 00:12:11.899 fused_ordering(650) 00:12:11.899 fused_ordering(651) 00:12:11.899 fused_ordering(652) 00:12:11.899 fused_ordering(653) 00:12:11.899 fused_ordering(654) 00:12:11.899 fused_ordering(655) 00:12:11.899 fused_ordering(656) 00:12:11.899 fused_ordering(657) 00:12:11.899 fused_ordering(658) 00:12:11.899 fused_ordering(659) 00:12:11.899 fused_ordering(660) 00:12:11.899 fused_ordering(661) 00:12:11.899 fused_ordering(662) 00:12:11.899 fused_ordering(663) 00:12:11.899 fused_ordering(664) 00:12:11.899 fused_ordering(665) 00:12:11.899 fused_ordering(666) 00:12:11.899 fused_ordering(667) 00:12:11.899 fused_ordering(668) 00:12:11.899 fused_ordering(669) 00:12:11.899 fused_ordering(670) 00:12:11.899 fused_ordering(671) 00:12:11.899 fused_ordering(672) 00:12:11.899 fused_ordering(673) 00:12:11.899 fused_ordering(674) 00:12:11.899 fused_ordering(675) 00:12:11.899 fused_ordering(676) 00:12:11.899 fused_ordering(677) 00:12:11.899 fused_ordering(678) 00:12:11.899 fused_ordering(679) 00:12:11.899 fused_ordering(680) 00:12:11.899 fused_ordering(681) 00:12:11.899 fused_ordering(682) 00:12:11.899 fused_ordering(683) 00:12:11.899 fused_ordering(684) 00:12:11.899 fused_ordering(685) 00:12:11.899 fused_ordering(686) 00:12:11.899 fused_ordering(687) 00:12:11.899 fused_ordering(688) 00:12:11.899 fused_ordering(689) 00:12:11.899 fused_ordering(690) 00:12:11.899 fused_ordering(691) 00:12:11.899 fused_ordering(692) 00:12:11.899 fused_ordering(693) 00:12:11.899 fused_ordering(694) 00:12:11.899 fused_ordering(695) 00:12:11.899 fused_ordering(696) 00:12:11.899 fused_ordering(697) 00:12:11.899 fused_ordering(698) 00:12:11.899 fused_ordering(699) 00:12:11.899 fused_ordering(700) 00:12:11.899 fused_ordering(701) 00:12:11.899 fused_ordering(702) 00:12:11.899 fused_ordering(703) 00:12:11.899 fused_ordering(704) 00:12:11.899 fused_ordering(705) 00:12:11.899 fused_ordering(706) 00:12:11.899 fused_ordering(707) 00:12:11.899 fused_ordering(708) 00:12:11.899 fused_ordering(709) 00:12:11.899 fused_ordering(710) 00:12:11.899 fused_ordering(711) 00:12:11.899 fused_ordering(712) 00:12:11.899 fused_ordering(713) 00:12:11.899 fused_ordering(714) 00:12:11.899 fused_ordering(715) 00:12:11.899 fused_ordering(716) 00:12:11.899 fused_ordering(717) 00:12:11.899 fused_ordering(718) 00:12:11.899 fused_ordering(719) 00:12:11.899 fused_ordering(720) 00:12:11.899 fused_ordering(721) 00:12:11.899 fused_ordering(722) 00:12:11.899 fused_ordering(723) 00:12:11.899 fused_ordering(724) 00:12:11.899 fused_ordering(725) 00:12:11.899 fused_ordering(726) 00:12:11.899 fused_ordering(727) 00:12:11.899 fused_ordering(728) 00:12:11.899 fused_ordering(729) 00:12:11.899 fused_ordering(730) 00:12:11.899 fused_ordering(731) 00:12:11.899 fused_ordering(732) 00:12:11.899 fused_ordering(733) 00:12:11.899 fused_ordering(734) 00:12:11.899 fused_ordering(735) 00:12:11.899 fused_ordering(736) 00:12:11.899 fused_ordering(737) 00:12:11.899 fused_ordering(738) 00:12:11.899 fused_ordering(739) 00:12:11.899 fused_ordering(740) 00:12:11.899 fused_ordering(741) 
00:12:11.899 fused_ordering(742) 00:12:11.899 fused_ordering(743) 00:12:11.899 fused_ordering(744) 00:12:11.899 fused_ordering(745) 00:12:11.899 fused_ordering(746) 00:12:11.899 fused_ordering(747) 00:12:11.899 fused_ordering(748) 00:12:11.899 fused_ordering(749) 00:12:11.899 fused_ordering(750) 00:12:11.899 fused_ordering(751) 00:12:11.899 fused_ordering(752) 00:12:11.899 fused_ordering(753) 00:12:11.899 fused_ordering(754) 00:12:11.899 fused_ordering(755) 00:12:11.899 fused_ordering(756) 00:12:11.899 fused_ordering(757) 00:12:11.899 fused_ordering(758) 00:12:11.899 fused_ordering(759) 00:12:11.899 fused_ordering(760) 00:12:11.899 fused_ordering(761) 00:12:11.899 fused_ordering(762) 00:12:11.899 fused_ordering(763) 00:12:11.899 fused_ordering(764) 00:12:11.899 fused_ordering(765) 00:12:11.899 fused_ordering(766) 00:12:11.899 fused_ordering(767) 00:12:11.899 fused_ordering(768) 00:12:11.899 fused_ordering(769) 00:12:11.899 fused_ordering(770) 00:12:11.899 fused_ordering(771) 00:12:11.899 fused_ordering(772) 00:12:11.899 fused_ordering(773) 00:12:11.899 fused_ordering(774) 00:12:11.899 fused_ordering(775) 00:12:11.899 fused_ordering(776) 00:12:11.899 fused_ordering(777) 00:12:11.899 fused_ordering(778) 00:12:11.899 fused_ordering(779) 00:12:11.899 fused_ordering(780) 00:12:11.899 fused_ordering(781) 00:12:11.899 fused_ordering(782) 00:12:11.899 fused_ordering(783) 00:12:11.899 fused_ordering(784) 00:12:11.899 fused_ordering(785) 00:12:11.899 fused_ordering(786) 00:12:11.899 fused_ordering(787) 00:12:11.899 fused_ordering(788) 00:12:11.899 fused_ordering(789) 00:12:11.899 fused_ordering(790) 00:12:11.899 fused_ordering(791) 00:12:11.899 fused_ordering(792) 00:12:11.899 fused_ordering(793) 00:12:11.899 fused_ordering(794) 00:12:11.899 fused_ordering(795) 00:12:11.899 fused_ordering(796) 00:12:11.899 fused_ordering(797) 00:12:11.899 fused_ordering(798) 00:12:11.899 fused_ordering(799) 00:12:11.899 fused_ordering(800) 00:12:11.899 fused_ordering(801) 00:12:11.899 fused_ordering(802) 00:12:11.899 fused_ordering(803) 00:12:11.899 fused_ordering(804) 00:12:11.899 fused_ordering(805) 00:12:11.899 fused_ordering(806) 00:12:11.899 fused_ordering(807) 00:12:11.899 fused_ordering(808) 00:12:11.899 fused_ordering(809) 00:12:11.899 fused_ordering(810) 00:12:11.899 fused_ordering(811) 00:12:11.899 fused_ordering(812) 00:12:11.899 fused_ordering(813) 00:12:11.899 fused_ordering(814) 00:12:11.899 fused_ordering(815) 00:12:11.899 fused_ordering(816) 00:12:11.899 fused_ordering(817) 00:12:11.899 fused_ordering(818) 00:12:11.899 fused_ordering(819) 00:12:11.899 fused_ordering(820) 00:12:12.831 fused_ordering(821) 00:12:12.831 fused_ordering(822) 00:12:12.831 fused_ordering(823) 00:12:12.831 fused_ordering(824) 00:12:12.831 fused_ordering(825) 00:12:12.831 fused_ordering(826) 00:12:12.831 fused_ordering(827) 00:12:12.831 fused_ordering(828) 00:12:12.831 fused_ordering(829) 00:12:12.831 fused_ordering(830) 00:12:12.831 fused_ordering(831) 00:12:12.831 fused_ordering(832) 00:12:12.831 fused_ordering(833) 00:12:12.831 fused_ordering(834) 00:12:12.831 fused_ordering(835) 00:12:12.831 fused_ordering(836) 00:12:12.831 fused_ordering(837) 00:12:12.831 fused_ordering(838) 00:12:12.831 fused_ordering(839) 00:12:12.831 fused_ordering(840) 00:12:12.831 fused_ordering(841) 00:12:12.831 fused_ordering(842) 00:12:12.831 fused_ordering(843) 00:12:12.831 fused_ordering(844) 00:12:12.831 fused_ordering(845) 00:12:12.831 fused_ordering(846) 00:12:12.831 fused_ordering(847) 00:12:12.831 fused_ordering(848) 00:12:12.831 
fused_ordering(849) 00:12:12.831 fused_ordering(850) 00:12:12.831 fused_ordering(851) 00:12:12.831 fused_ordering(852) 00:12:12.831 fused_ordering(853) 00:12:12.831 fused_ordering(854) 00:12:12.831 fused_ordering(855) 00:12:12.831 fused_ordering(856) 00:12:12.831 fused_ordering(857) 00:12:12.832 fused_ordering(858) 00:12:12.832 fused_ordering(859) 00:12:12.832 fused_ordering(860) 00:12:12.832 fused_ordering(861) 00:12:12.832 fused_ordering(862) 00:12:12.832 fused_ordering(863) 00:12:12.832 fused_ordering(864) 00:12:12.832 fused_ordering(865) 00:12:12.832 fused_ordering(866) 00:12:12.832 fused_ordering(867) 00:12:12.832 fused_ordering(868) 00:12:12.832 fused_ordering(869) 00:12:12.832 fused_ordering(870) 00:12:12.832 fused_ordering(871) 00:12:12.832 fused_ordering(872) 00:12:12.832 fused_ordering(873) 00:12:12.832 fused_ordering(874) 00:12:12.832 fused_ordering(875) 00:12:12.832 fused_ordering(876) 00:12:12.832 fused_ordering(877) 00:12:12.832 fused_ordering(878) 00:12:12.832 fused_ordering(879) 00:12:12.832 fused_ordering(880) 00:12:12.832 fused_ordering(881) 00:12:12.832 fused_ordering(882) 00:12:12.832 fused_ordering(883) 00:12:12.832 fused_ordering(884) 00:12:12.832 fused_ordering(885) 00:12:12.832 fused_ordering(886) 00:12:12.832 fused_ordering(887) 00:12:12.832 fused_ordering(888) 00:12:12.832 fused_ordering(889) 00:12:12.832 fused_ordering(890) 00:12:12.832 fused_ordering(891) 00:12:12.832 fused_ordering(892) 00:12:12.832 fused_ordering(893) 00:12:12.832 fused_ordering(894) 00:12:12.832 fused_ordering(895) 00:12:12.832 fused_ordering(896) 00:12:12.832 fused_ordering(897) 00:12:12.832 fused_ordering(898) 00:12:12.832 fused_ordering(899) 00:12:12.832 fused_ordering(900) 00:12:12.832 fused_ordering(901) 00:12:12.832 fused_ordering(902) 00:12:12.832 fused_ordering(903) 00:12:12.832 fused_ordering(904) 00:12:12.832 fused_ordering(905) 00:12:12.832 fused_ordering(906) 00:12:12.832 fused_ordering(907) 00:12:12.832 fused_ordering(908) 00:12:12.832 fused_ordering(909) 00:12:12.832 fused_ordering(910) 00:12:12.832 fused_ordering(911) 00:12:12.832 fused_ordering(912) 00:12:12.832 fused_ordering(913) 00:12:12.832 fused_ordering(914) 00:12:12.832 fused_ordering(915) 00:12:12.832 fused_ordering(916) 00:12:12.832 fused_ordering(917) 00:12:12.832 fused_ordering(918) 00:12:12.832 fused_ordering(919) 00:12:12.832 fused_ordering(920) 00:12:12.832 fused_ordering(921) 00:12:12.832 fused_ordering(922) 00:12:12.832 fused_ordering(923) 00:12:12.832 fused_ordering(924) 00:12:12.832 fused_ordering(925) 00:12:12.832 fused_ordering(926) 00:12:12.832 fused_ordering(927) 00:12:12.832 fused_ordering(928) 00:12:12.832 fused_ordering(929) 00:12:12.832 fused_ordering(930) 00:12:12.832 fused_ordering(931) 00:12:12.832 fused_ordering(932) 00:12:12.832 fused_ordering(933) 00:12:12.832 fused_ordering(934) 00:12:12.832 fused_ordering(935) 00:12:12.832 fused_ordering(936) 00:12:12.832 fused_ordering(937) 00:12:12.832 fused_ordering(938) 00:12:12.832 fused_ordering(939) 00:12:12.832 fused_ordering(940) 00:12:12.832 fused_ordering(941) 00:12:12.832 fused_ordering(942) 00:12:12.832 fused_ordering(943) 00:12:12.832 fused_ordering(944) 00:12:12.832 fused_ordering(945) 00:12:12.832 fused_ordering(946) 00:12:12.832 fused_ordering(947) 00:12:12.832 fused_ordering(948) 00:12:12.832 fused_ordering(949) 00:12:12.832 fused_ordering(950) 00:12:12.832 fused_ordering(951) 00:12:12.832 fused_ordering(952) 00:12:12.832 fused_ordering(953) 00:12:12.832 fused_ordering(954) 00:12:12.832 fused_ordering(955) 00:12:12.832 fused_ordering(956) 
00:12:12.832 fused_ordering(957) ... fused_ordering(1023) [remaining consecutive counters elided; the run completes at 1023]
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:12.832 rmmod nvme_tcp
00:12:12.832 rmmod nvme_fabrics
00:12:12.832 rmmod nvme_keyring
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 3346468 ']'
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 3346468
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 3346468 ']'
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 3346468
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3346468
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3346468'
00:12:12.832 killing process with pid 3346468
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 3346468
00:12:12.832 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 3346468
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:13.090 23:49:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:14.988
00:12:14.988 real 0m8.562s
00:12:14.988 user 0m6.334s
00:12:14.988 sys 0m3.529s
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:12:14.988 ************************************
00:12:14.988 END TEST nvmf_fused_ordering
00:12:14.988 ************************************
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:14.988 23:49:45
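The teardown just traced follows the harness's usual pattern: unload the nvme module stack, then killprocess the target and wait for it so the next test starts clean. Reduced to its skeleton (a simplified sketch; the real helper also special-cases sudo-owned processes, which is what the ps/comm= check above is for):

killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 0   # nothing to do if the pid is already gone
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                              # works here because nvmf_tgt is a child of this shell
}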
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.988 ************************************ 00:12:14.988 START TEST nvmf_ns_masking 00:12:14.988 ************************************ 00:12:14.988 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:15.246 * Looking for test storage... 00:12:15.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...repetitions of the same three toolchain directories elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[...repeated toolchain directories elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[...repeated toolchain directories elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[...repeated toolchain directories elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:12:15.246 23:49:45
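The PATH values above balloon because paths/export.sh prepends the same three toolchain directories every time it is sourced, once per test script. That is harmless, but if the duplication gets in your way when reproducing locally, a standard de-duplication one-liner (not something the harness itself runs) is:

PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++')
PATH=${PATH%:}   # trim the trailing ':' the join leaves behind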
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:15.246 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5a7a66c5-fad5-42cb-a7a1-7b27d29b191b 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=848a9612-7591-4ca6-9c1f-5feef5b3a139 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=691e4dca-e811-475b-bae7-89e2199917d9 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:15.247 23:49:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.148 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
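The device discovery above is just matching PCI vendor/device IDs against the tables it builds: 0x8086:0x159b is an Intel E810 Ethernet function, which is what this job's SPDK_TEST_NVMF_NICS=e810 setting expects. The same lookup by hand, with the function address taken from this run:

lspci -d 8086:159b                           # list E810 functions present on the host
ls /sys/bus/pci/devices/0000:0a:00.0/net     # netdev name(s) bound to the first function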
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:17.149 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:17.149 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:17.149 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:17.149 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.149 23:49:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:17.149 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:17.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:17.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms
00:12:17.407
00:12:17.407 --- 10.0.0.2 ping statistics ---
00:12:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:17.407 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:17.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:17.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:12:17.407
00:12:17.407 --- 10.0.0.1 ping statistics ---
00:12:17.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:17.407 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:17.407 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=3348939
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 3348939
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3348939 ']'
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
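Everything nvmf_tcp_init just did amounts to a small two-port topology: the first E810 port (cvl_0_0) moves into a private network namespace to play the target at 10.0.0.2, the second (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove reachability in both directions before any NVMe traffic flows. Condensed from the trace above, with this run's interface names:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own netns
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns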
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.408 23:49:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:17.408 [2024-07-24 23:49:47.848696] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:12:17.408 [2024-07-24 23:49:47.848791] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.408 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.408 [2024-07-24 23:49:47.913576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.666 [2024-07-24 23:49:48.024507] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.666 [2024-07-24 23:49:48.024570] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.666 [2024-07-24 23:49:48.024598] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.666 [2024-07-24 23:49:48.024609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.666 [2024-07-24 23:49:48.024619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.666 [2024-07-24 23:49:48.024645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.666 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:17.924 [2024-07-24 23:49:48.434936] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.924 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:17.924 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:17.924 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:18.182 Malloc1 00:12:18.182 23:49:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:18.440 Malloc2 00:12:18.696 23:49:49 
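With the target process listening on /var/tmp/spdk.sock, provisioning happens entirely over JSON-RPC. The calls just traced, restated with the long /var/jenkins/... path shortened to rpc.py:

rpc.py nvmf_create_transport -t tcp -o -u 8192   # flags exactly as NVMF_TRANSPORT_OPTS passes them
rpc.py bdev_malloc_create 64 512 -b Malloc1      # 64 MiB backing bdev, 512-byte blocks
rpc.py bdev_malloc_create 64 512 -b Malloc2      # second bdev, used for namespace 2 below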
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:18.954 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:19.211 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:19.469 [2024-07-24 23:49:49.827239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 691e4dca-e811-475b-bae7-89e2199917d9 -a 10.0.0.2 -s 4420 -i 4 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n '' ]] 00:12:19.469 23:49:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:21.365 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:21.365 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:21.365 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:21.623 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:21.623 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:21.623 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:21.623 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:21.623 23:49:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.623 [ 0]:0x1 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7341596c9c63463385006ad069a287e5 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7341596c9c63463385006ad069a287e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.623 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:21.881 [ 0]:0x1 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7341596c9c63463385006ad069a287e5 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7341596c9c63463385006ad069a287e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:21.881 [ 1]:0x2 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:21.881 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:22.138 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:22.138 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:22.138 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:22.138 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:22.413 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:22.413 23:49:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.686 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
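Each ns_is_visible call in this trace reduces to two nvme-cli reads against the connected controller: is the NSID listed at all, and does id-ns report a real NGUID rather than the all-zeroes value a masked namespace identifies as here. A standalone rendering of that probe (sketch; the controller name nvme0 is the one resolved in this run):

ns_is_visible() {
  local nsid=$1                                   # e.g. 0x1, as the test passes it
  nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
  local nguid
  nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
  [[ $nguid != "00000000000000000000000000000000" ]]   # zeroes means hidden
}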
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 691e4dca-e811-475b-bae7-89e2199917d9 -a 10.0.0.2 -s 4420 -i 4 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 1 ]] 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=1 00:12:22.942 23:49:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:25.467 [ 0]:0x2 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:25.467 [ 0]:0x1 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:25.467 23:49:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7341596c9c63463385006ad069a287e5 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7341596c9c63463385006ad069a287e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:25.467 [ 1]:0x2 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.467 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:25.725 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:25.725 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:25.725 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:25.725 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.982 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:25.983 [ 0]:0x2 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
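Three RPCs drive the whole masking scenario, and all of them appear verbatim in this trace: create the namespace hidden, grant one host NQN visibility, then revoke it again (rpc.py path shortened as before):

rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # expose NSID 1 to host1
rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hide it again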
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.983 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:26.240 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:26.240 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 691e4dca-e811-475b-bae7-89e2199917d9 -a 10.0.0.2 -s 4420 -i 4 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local i=0 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:12:26.497 23:49:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # sleep 2 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # return 0 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:28.395 23:49:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:28.653 [ 0]:0x1 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=7341596c9c63463385006ad069a287e5 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 7341596c9c63463385006ad069a287e5 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:28.653 [ 1]:0x2 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:28.653 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:29.219 23:49:59 
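The recurring NOT wrapper is the harness's negative assertion: run the argument list, treat a plain failure as the expected outcome, and let signal-style exits (status > 128) still count as real failures. A stripped-down equivalent of the es bookkeeping visible in the trace:

NOT() {
  local es=0
  "$@" || es=$?
  (( es > 128 )) && return "$es"   # crash/signal exits are never the "expected" failure (simplified)
  (( es != 0 ))                    # success only when the wrapped command failed
}
NOT ns_is_visible 0x1              # holds while NSID 1 is masked from this host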
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:29.219 [ 0]:0x2 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:29.219 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:29.477 [2024-07-24 23:49:59.909908] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:29.477 request: 00:12:29.477 { 00:12:29.477 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:29.477 "nsid": 2, 00:12:29.477 "host": "nqn.2016-06.io.spdk:host1", 00:12:29.477 "method": "nvmf_ns_remove_host", 00:12:29.477 "req_id": 1 00:12:29.477 } 00:12:29.477 Got JSON-RPC error response 00:12:29.477 response: 00:12:29.477 { 00:12:29.477 "code": -32602, 00:12:29.477 "message": "Invalid parameters" 00:12:29.477 } 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:29.477 23:49:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:29.477 [ 0]:0x2 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=6bbc49d5e4604c3484126fdbd190b4e1 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 6bbc49d5e4604c3484126fdbd190b4e1 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:29.477 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.736 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3350560 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3350560 /var/tmp/host.sock 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 3350560 ']' 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:29.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:29.736 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:29.736 [2024-07-24 23:50:00.249190] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
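For readers following the trace: the visibility probe that ns_masking.sh repeats above (the @43-@45 steps) boils down to the following minimal sketch. It assumes a controller already connected as /dev/nvme0; the ns_visible name and the early return are illustrative, not the script's exact shape.

    ns_visible() {
        local nsid=$1
        # A namespace hidden from this host drops out of the active list...
        nvme list-ns /dev/nvme0 | grep -q "$nsid" || return 1
        # ...and, as the trace shows, its identify data reads back with an
        # all-zero NGUID, so compare against the 32-zero pattern.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_visible 0x1 && echo "nsid 1 visible" || echo "nsid 1 masked"

The NOT wrapper in the trace then treats the resulting non-zero exit as the expected outcome once nvmf_ns_remove_host has hidden the namespace.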
00:12:29.736 [2024-07-24 23:50:00.249308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3350560 ] 00:12:29.736 EAL: No free 2048 kB hugepages reported on node 1 00:12:29.736 [2024-07-24 23:50:00.307755] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:29.996 [2024-07-24 23:50:00.418756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.253 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:30.253 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:30.253 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:30.511 23:50:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:30.768 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5a7a66c5-fad5-42cb-a7a1-7b27d29b191b 00:12:30.768 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:30.768 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5A7A66C5FAD542CBA7A17B27D29B191B -i 00:12:31.025 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 848a9612-7591-4ca6-9c1f-5feef5b3a139 00:12:31.025 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:31.025 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 848A961275914CA69C1F5FEEF5B3A139 -i 00:12:31.282 23:50:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:31.539 23:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:31.796 23:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:31.796 23:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:32.361 nvme0n1 00:12:32.361 23:50:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:32.361 23:50:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:32.617 nvme1n2 00:12:32.617 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:32.617 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:32.617 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:32.617 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:32.617 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:32.874 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:32.874 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:32.874 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:32.874 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:33.131 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5a7a66c5-fad5-42cb-a7a1-7b27d29b191b == \5\a\7\a\6\6\c\5\-\f\a\d\5\-\4\2\c\b\-\a\7\a\1\-\7\b\2\7\d\2\9\b\1\9\1\b ]] 00:12:33.131 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:33.131 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:33.131 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 848a9612-7591-4ca6-9c1f-5feef5b3a139 == \8\4\8\a\9\6\1\2\-\7\5\9\1\-\4\c\a\6\-\9\c\1\f\-\5\f\e\e\f\5\b\3\a\1\3\9 ]] 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 3350560 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3350560 ']' 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3350560 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3350560 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 3350560' 00:12:33.388 killing process with pid 3350560 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3350560 00:12:33.388 23:50:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3350560 00:12:33.953 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.210 rmmod nvme_tcp 00:12:34.210 rmmod nvme_fabrics 00:12:34.210 rmmod nvme_keyring 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 3348939 ']' 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 3348939 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 3348939 ']' 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 3348939 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3348939 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3348939' 00:12:34.210 killing process with pid 3348939 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 3348939 00:12:34.210 23:50:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 3348939 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.468 
23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:34.468 23:50:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:36.995 00:12:36.995 real 0m21.497s 00:12:36.995 user 0m28.104s 00:12:36.995 sys 0m4.100s 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:36.995 ************************************ 00:12:36.995 END TEST nvmf_ns_masking 00:12:36.995 ************************************ 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:36.995 ************************************ 00:12:36.995 START TEST nvmf_nvme_cli 00:12:36.995 ************************************ 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:36.995 * Looking for test storage... 
00:12:36.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.995 23:50:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:36.995 23:50:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:38.896 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:38.896 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:38.896 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:38.896 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:38.896 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:38.896 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:38.897 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:38.897 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:38.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:38.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:12:38.897 00:12:38.897 --- 10.0.0.2 ping statistics --- 00:12:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.897 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:38.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:38.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:12:38.897 00:12:38.897 --- 10.0.0.1 ping statistics --- 00:12:38.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:38.897 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=3353052 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 3353052 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 3353052 ']' 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:38.897 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:38.897 [2024-07-24 23:50:09.401023] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
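Stripped of the xtrace prefixes, the nvmftestinit sequence captured above amounts to the following; every command is taken from the trace, with the interface names cvl_0_0/cvl_0_1 and the nvmf_tgt path specific to this test machine (path abbreviated here).

    # Move the target-side NIC into its own namespace and address both ends.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow NVMe/TCP traffic in, then verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # The target application itself runs inside the namespace:
    ip netns exec cvl_0_0_ns_spdk spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF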
00:12:38.897 [2024-07-24 23:50:09.401105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:38.897 EAL: No free 2048 kB hugepages reported on node 1 00:12:38.897 [2024-07-24 23:50:09.468886] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.155 [2024-07-24 23:50:09.592235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.155 [2024-07-24 23:50:09.592301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.155 [2024-07-24 23:50:09.592319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.155 [2024-07-24 23:50:09.592335] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.155 [2024-07-24 23:50:09.592347] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.155 [2024-07-24 23:50:09.592404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.155 [2024-07-24 23:50:09.592457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.155 [2024-07-24 23:50:09.592521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.155 [2024-07-24 23:50:09.592518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.155 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.155 [2024-07-24 23:50:09.759930] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 Malloc0 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:39.413 23:50:09 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 Malloc1 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.413 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.414 [2024-07-24 23:50:09.846007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:39.414 00:12:39.414 Discovery Log Number of Records 2, Generation counter 2 00:12:39.414 =====Discovery Log Entry 0====== 00:12:39.414 trtype: tcp 00:12:39.414 adrfam: ipv4 00:12:39.414 subtype: current discovery subsystem 00:12:39.414 treq: not required 
00:12:39.414 portid: 0 00:12:39.414 trsvcid: 4420 00:12:39.414 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:39.414 traddr: 10.0.0.2 00:12:39.414 eflags: explicit discovery connections, duplicate discovery information 00:12:39.414 sectype: none 00:12:39.414 =====Discovery Log Entry 1====== 00:12:39.414 trtype: tcp 00:12:39.414 adrfam: ipv4 00:12:39.414 subtype: nvme subsystem 00:12:39.414 treq: not required 00:12:39.414 portid: 0 00:12:39.414 trsvcid: 4420 00:12:39.414 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:39.414 traddr: 10.0.0.2 00:12:39.414 eflags: none 00:12:39.414 sectype: none 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:39.414 23:50:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1196 -- # local i=0 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1197 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # [[ -n 2 ]] 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # nvme_device_counter=2 00:12:39.996 23:50:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # sleep 2 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # (( i++ <= 15 )) 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # lsblk -l -o NAME,SERIAL 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_devices=2 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # return 0 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:42.519 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:42.520 /dev/nvme0n1 ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:42.520 23:50:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.520 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1217 -- # local i=0 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # lsblk -o NAME,SERIAL 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1218 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # lsblk -l -o NAME,SERIAL 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1225 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # return 0 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.520 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.777 rmmod nvme_tcp 00:12:42.777 rmmod nvme_fabrics 00:12:42.777 rmmod nvme_keyring 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 3353052 ']' 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 3353052 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 3353052 ']' 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 3353052 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3353052 00:12:42.777 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:42.778 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:42.778 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3353052' 00:12:42.778 killing process with pid 3353052 00:12:42.778 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 3353052 00:12:42.778 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 3353052 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.036 23:50:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.567 00:12:45.567 real 0m8.439s 00:12:45.567 user 0m16.049s 00:12:45.567 sys 0m2.212s 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.567 ************************************ 00:12:45.567 END TEST nvmf_nvme_cli 00:12:45.567 ************************************ 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.567 ************************************ 00:12:45.567 START TEST nvmf_vfio_user 00:12:45.567 ************************************ 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:45.567 * Looking for test storage... 
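For reference, the device enumeration traced in the nvme_cli test above (nvmf/common.sh@521-526) just walks `nvme list` output and keeps first fields that name a /dev/nvme* node. A minimal sketch of that helper, reconstructed from the xtrace; the upstream nvmf/common.sh implementation may differ in detail:

    get_nvme_devs() {
        local dev _
        # Header rows ("Node", "------...") fail the pattern test; device rows pass.
        while read -r dev _; do
            [[ $dev == /dev/nvme* ]] && echo "$dev"
        done < <(nvme list)
    }
    devs=($(get_nvme_devs))   # /dev/nvme0n2 /dev/nvme0n1 in the run above
    nvme_num=${#devs[@]}      # 2, matching nvme_num=2 in the trace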
00:12:45.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
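The PATH values printed around here balloon because each source of paths/export.sh prepends the golangci, go, and protoc tool directories again without de-duplicating. Inferred from the echoed values (not from the script source), the effect per source is roughly:

    PATH=/opt/golangci/1.54.2/bin:$PATH   # paths/export.sh@2
    PATH=/opt/go/1.21.1/bin:$PATH         # paths/export.sh@3
    PATH=/opt/protoc/21.7/bin:$PATH       # paths/export.sh@4
    export PATH                           # paths/export.sh@5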
00:12:45.567 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:45.568 23:50:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3353974 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3353974' 00:12:45.568 Process pid: 3353974 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3353974 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3353974 ']' 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.568 23:50:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:45.568 [2024-07-24 23:50:15.742156] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:12:45.568 [2024-07-24 23:50:15.742261] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.568 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.568 [2024-07-24 23:50:15.799478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.568 [2024-07-24 23:50:15.907170] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.568 [2024-07-24 23:50:15.907228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
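setup_nvmf_vfio_user above reduces to launching nvmf_tgt on four cores and blocking until its RPC socket answers. A minimal sketch with the same flags as the trace; waitforlisten is approximated here as an RPC poll (the framework's helper also watches the pid):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT   # killprocess is the framework helper seen in the trace
    # Block until the target serves RPCs on the default UNIX socket.
    until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done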
00:12:45.568 [2024-07-24 23:50:15.907253] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.568 [2024-07-24 23:50:15.907269] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.568 [2024-07-24 23:50:15.907280] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.568 [2024-07-24 23:50:15.907372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.568 [2024-07-24 23:50:15.907441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.568 [2024-07-24 23:50:15.907539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:45.568 [2024-07-24 23:50:15.907542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.568 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.568 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:45.568 23:50:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:46.499 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:46.756 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:46.756 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:46.756 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:46.756 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:46.756 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:47.013 Malloc1 00:12:47.013 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:47.270 23:50:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:47.526 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:47.783 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.783 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:47.783 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:48.041 Malloc2 00:12:48.041 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
00:12:48.299 23:50:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:48.556 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:48.814 23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:48.814 [2024-07-24 23:50:19.365691] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:12:48.814 [2024-07-24 23:50:19.365731] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3354396 ] 00:12:48.814 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.814 [2024-07-24 23:50:19.399463] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:48.814 [2024-07-24 23:50:19.407742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:48.814 [2024-07-24 23:50:19.407770] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f8593573000 00:12:48.814 [2024-07-24 23:50:19.408734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.409733] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.410739] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.411744] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.412745] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.413748] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.414753] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.415757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:48.814 [2024-07-24 23:50:19.416761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:48.814 [2024-07-24 23:50:19.416780] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f8593568000 00:12:48.814 [2024-07-24 23:50:19.417893] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:49.073 [2024-07-24 23:50:19.437873] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:49.073 [2024-07-24 23:50:19.437912] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:49.073 [2024-07-24 23:50:19.440887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:49.073 [2024-07-24 23:50:19.440940] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:49.073 [2024-07-24 23:50:19.441029] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:49.073 [2024-07-24 23:50:19.441055] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:49.073 [2024-07-24 23:50:19.441065] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:49.073 [2024-07-24 23:50:19.441881] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:49.073 [2024-07-24 23:50:19.441903] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:49.073 [2024-07-24 23:50:19.441917] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:49.073 [2024-07-24 23:50:19.442891] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:49.073 [2024-07-24 23:50:19.442910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:49.073 [2024-07-24 23:50:19.442930] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:49.073 [2024-07-24 23:50:19.443896] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:49.073 [2024-07-24 23:50:19.443915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:49.073 [2024-07-24 23:50:19.444899] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:49.073 [2024-07-24 23:50:19.444917] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:49.073 [2024-07-24 23:50:19.444926] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:49.074 [2024-07-24 23:50:19.444937] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:49.074 [2024-07-24 23:50:19.445047] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:49.074 [2024-07-24 23:50:19.445056] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:49.074 [2024-07-24 23:50:19.445064] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:49.074 [2024-07-24 23:50:19.445908] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:49.074 [2024-07-24 23:50:19.446912] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:49.074 [2024-07-24 23:50:19.447917] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:49.074 [2024-07-24 23:50:19.448911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:49.074 [2024-07-24 23:50:19.449021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:49.074 [2024-07-24 23:50:19.449930] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:49.074 [2024-07-24 23:50:19.449948] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:49.074 [2024-07-24 23:50:19.449957] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.449979] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:49.074 [2024-07-24 23:50:19.449997] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450022] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.074 [2024-07-24 23:50:19.450031] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.074 [2024-07-24 23:50:19.450038] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.074 [2024-07-24 23:50:19.450057] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450126] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:49.074 [2024-07-24 23:50:19.450134] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:49.074 [2024-07-24 23:50:19.450141] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:49.074 [2024-07-24 23:50:19.450149] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:49.074 [2024-07-24 23:50:19.450156] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:49.074 [2024-07-24 23:50:19.450164] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:49.074 [2024-07-24 23:50:19.450171] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450183] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.074 [2024-07-24 23:50:19.450278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.074 [2024-07-24 23:50:19.450291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.074 [2024-07-24 23:50:19.450304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:49.074 [2024-07-24 23:50:19.450313] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450331] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450369] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:49.074 
[2024-07-24 23:50:19.450377] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450402] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450416] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450495] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450552] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:49.074 [2024-07-24 23:50:19.450560] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:49.074 [2024-07-24 23:50:19.450566] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.074 [2024-07-24 23:50:19.450575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450625] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:49.074 [2024-07-24 23:50:19.450639] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450653] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450664] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.074 [2024-07-24 23:50:19.450672] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.074 [2024-07-24 23:50:19.450678] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.074 [2024-07-24 23:50:19.450686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450745] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450757] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:49.074 [2024-07-24 23:50:19.450765] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.074 [2024-07-24 23:50:19.450770] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.074 [2024-07-24 23:50:19.450779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.074 [2024-07-24 23:50:19.450795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:49.074 [2024-07-24 23:50:19.450808] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450844] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450871] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:49.074 [2024-07-24 23:50:19.450878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:49.074 [2024-07-24 23:50:19.450886] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:49.074 [2024-07-24 23:50:19.450913] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.450931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.450950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.450965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.450981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:49.075 [2024-07-24 
23:50:19.450992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.451008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.451019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.451041] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:49.075 [2024-07-24 23:50:19.451051] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:49.075 [2024-07-24 23:50:19.451057] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:49.075 [2024-07-24 23:50:19.451062] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:49.075 [2024-07-24 23:50:19.451068] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:49.075 [2024-07-24 23:50:19.451077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:49.075 [2024-07-24 23:50:19.451088] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:49.075 [2024-07-24 23:50:19.451096] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:49.075 [2024-07-24 23:50:19.451102] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.075 [2024-07-24 23:50:19.451110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.451121] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:49.075 [2024-07-24 23:50:19.451128] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:49.075 [2024-07-24 23:50:19.451134] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.075 [2024-07-24 23:50:19.451142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.451154] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:49.075 [2024-07-24 23:50:19.451162] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:49.075 [2024-07-24 23:50:19.451171] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:49.075 [2024-07-24 23:50:19.451180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:49.075 [2024-07-24 23:50:19.451191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.451211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 
23:50:19.451252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:49.075 [2024-07-24 23:50:19.451268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:49.075 ===================================================== 00:12:49.075 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:49.075 ===================================================== 00:12:49.075 Controller Capabilities/Features 00:12:49.075 ================================ 00:12:49.075 Vendor ID: 4e58 00:12:49.075 Subsystem Vendor ID: 4e58 00:12:49.075 Serial Number: SPDK1 00:12:49.075 Model Number: SPDK bdev Controller 00:12:49.075 Firmware Version: 24.09 00:12:49.075 Recommended Arb Burst: 6 00:12:49.075 IEEE OUI Identifier: 8d 6b 50 00:12:49.075 Multi-path I/O 00:12:49.075 May have multiple subsystem ports: Yes 00:12:49.075 May have multiple controllers: Yes 00:12:49.075 Associated with SR-IOV VF: No 00:12:49.075 Max Data Transfer Size: 131072 00:12:49.075 Max Number of Namespaces: 32 00:12:49.075 Max Number of I/O Queues: 127 00:12:49.075 NVMe Specification Version (VS): 1.3 00:12:49.075 NVMe Specification Version (Identify): 1.3 00:12:49.075 Maximum Queue Entries: 256 00:12:49.075 Contiguous Queues Required: Yes 00:12:49.075 Arbitration Mechanisms Supported 00:12:49.075 Weighted Round Robin: Not Supported 00:12:49.075 Vendor Specific: Not Supported 00:12:49.075 Reset Timeout: 15000 ms 00:12:49.075 Doorbell Stride: 4 bytes 00:12:49.075 NVM Subsystem Reset: Not Supported 00:12:49.075 Command Sets Supported 00:12:49.075 NVM Command Set: Supported 00:12:49.075 Boot Partition: Not Supported 00:12:49.075 Memory Page Size Minimum: 4096 bytes 00:12:49.075 Memory Page Size Maximum: 4096 bytes 00:12:49.075 Persistent Memory Region: Not Supported 00:12:49.075 Optional Asynchronous Events Supported 00:12:49.075 Namespace Attribute Notices: Supported 00:12:49.075 Firmware Activation Notices: Not Supported 00:12:49.075 ANA Change Notices: Not Supported 00:12:49.075 PLE Aggregate Log Change Notices: Not Supported 00:12:49.075 LBA Status Info Alert Notices: Not Supported 00:12:49.075 EGE Aggregate Log Change Notices: Not Supported 00:12:49.075 Normal NVM Subsystem Shutdown event: Not Supported 00:12:49.075 Zone Descriptor Change Notices: Not Supported 00:12:49.075 Discovery Log Change Notices: Not Supported 00:12:49.075 Controller Attributes 00:12:49.075 128-bit Host Identifier: Supported 00:12:49.075 Non-Operational Permissive Mode: Not Supported 00:12:49.075 NVM Sets: Not Supported 00:12:49.075 Read Recovery Levels: Not Supported 00:12:49.075 Endurance Groups: Not Supported 00:12:49.075 Predictable Latency Mode: Not Supported 00:12:49.075 Traffic Based Keep ALive: Not Supported 00:12:49.075 Namespace Granularity: Not Supported 00:12:49.075 SQ Associations: Not Supported 00:12:49.075 UUID List: Not Supported 00:12:49.075 Multi-Domain Subsystem: Not Supported 00:12:49.075 Fixed Capacity Management: Not Supported 00:12:49.075 Variable Capacity Management: Not Supported 00:12:49.075 Delete Endurance Group: Not Supported 00:12:49.075 Delete NVM Set: Not Supported 00:12:49.075 Extended LBA Formats Supported: Not Supported 00:12:49.075 Flexible Data Placement Supported: Not Supported 00:12:49.075 00:12:49.075 Controller Memory Buffer Support 00:12:49.075 ================================ 00:12:49.075 Supported: No 00:12:49.075 00:12:49.075 Persistent 
Memory Region Support 00:12:49.075 ================================ 00:12:49.075 Supported: No 00:12:49.075 00:12:49.075 Admin Command Set Attributes 00:12:49.075 ============================ 00:12:49.075 Security Send/Receive: Not Supported 00:12:49.075 Format NVM: Not Supported 00:12:49.075 Firmware Activate/Download: Not Supported 00:12:49.075 Namespace Management: Not Supported 00:12:49.075 Device Self-Test: Not Supported 00:12:49.075 Directives: Not Supported 00:12:49.075 NVMe-MI: Not Supported 00:12:49.075 Virtualization Management: Not Supported 00:12:49.075 Doorbell Buffer Config: Not Supported 00:12:49.075 Get LBA Status Capability: Not Supported 00:12:49.075 Command & Feature Lockdown Capability: Not Supported 00:12:49.075 Abort Command Limit: 4 00:12:49.075 Async Event Request Limit: 4 00:12:49.075 Number of Firmware Slots: N/A 00:12:49.075 Firmware Slot 1 Read-Only: N/A 00:12:49.075 Firmware Activation Without Reset: N/A 00:12:49.075 Multiple Update Detection Support: N/A 00:12:49.075 Firmware Update Granularity: No Information Provided 00:12:49.075 Per-Namespace SMART Log: No 00:12:49.075 Asymmetric Namespace Access Log Page: Not Supported 00:12:49.075 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:49.075 Command Effects Log Page: Supported 00:12:49.075 Get Log Page Extended Data: Supported 00:12:49.075 Telemetry Log Pages: Not Supported 00:12:49.075 Persistent Event Log Pages: Not Supported 00:12:49.075 Supported Log Pages Log Page: May Support 00:12:49.075 Commands Supported & Effects Log Page: Not Supported 00:12:49.075 Feature Identifiers & Effects Log Page:May Support 00:12:49.075 NVMe-MI Commands & Effects Log Page: May Support 00:12:49.075 Data Area 4 for Telemetry Log: Not Supported 00:12:49.075 Error Log Page Entries Supported: 128 00:12:49.075 Keep Alive: Supported 00:12:49.075 Keep Alive Granularity: 10000 ms 00:12:49.075 00:12:49.075 NVM Command Set Attributes 00:12:49.075 ========================== 00:12:49.075 Submission Queue Entry Size 00:12:49.075 Max: 64 00:12:49.075 Min: 64 00:12:49.075 Completion Queue Entry Size 00:12:49.075 Max: 16 00:12:49.075 Min: 16 00:12:49.075 Number of Namespaces: 32 00:12:49.075 Compare Command: Supported 00:12:49.075 Write Uncorrectable Command: Not Supported 00:12:49.075 Dataset Management Command: Supported 00:12:49.075 Write Zeroes Command: Supported 00:12:49.075 Set Features Save Field: Not Supported 00:12:49.075 Reservations: Not Supported 00:12:49.075 Timestamp: Not Supported 00:12:49.075 Copy: Supported 00:12:49.076 Volatile Write Cache: Present 00:12:49.076 Atomic Write Unit (Normal): 1 00:12:49.076 Atomic Write Unit (PFail): 1 00:12:49.076 Atomic Compare & Write Unit: 1 00:12:49.076 Fused Compare & Write: Supported 00:12:49.076 Scatter-Gather List 00:12:49.076 SGL Command Set: Supported (Dword aligned) 00:12:49.076 SGL Keyed: Not Supported 00:12:49.076 SGL Bit Bucket Descriptor: Not Supported 00:12:49.076 SGL Metadata Pointer: Not Supported 00:12:49.076 Oversized SGL: Not Supported 00:12:49.076 SGL Metadata Address: Not Supported 00:12:49.076 SGL Offset: Not Supported 00:12:49.076 Transport SGL Data Block: Not Supported 00:12:49.076 Replay Protected Memory Block: Not Supported 00:12:49.076 00:12:49.076 Firmware Slot Information 00:12:49.076 ========================= 00:12:49.076 Active slot: 1 00:12:49.076 Slot 1 Firmware Revision: 24.09 00:12:49.076 00:12:49.076 00:12:49.076 Commands Supported and Effects 00:12:49.076 ============================== 00:12:49.076 Admin Commands 00:12:49.076 -------------- 00:12:49.076 Get 
Log Page (02h): Supported 00:12:49.076 Identify (06h): Supported 00:12:49.076 Abort (08h): Supported 00:12:49.076 Set Features (09h): Supported 00:12:49.076 Get Features (0Ah): Supported 00:12:49.076 Asynchronous Event Request (0Ch): Supported 00:12:49.076 Keep Alive (18h): Supported 00:12:49.076 I/O Commands 00:12:49.076 ------------ 00:12:49.076 Flush (00h): Supported LBA-Change 00:12:49.076 Write (01h): Supported LBA-Change 00:12:49.076 Read (02h): Supported 00:12:49.076 Compare (05h): Supported 00:12:49.076 Write Zeroes (08h): Supported LBA-Change 00:12:49.076 Dataset Management (09h): Supported LBA-Change 00:12:49.076 Copy (19h): Supported LBA-Change 00:12:49.076 00:12:49.076 Error Log 00:12:49.076 ========= 00:12:49.076 00:12:49.076 Arbitration 00:12:49.076 =========== 00:12:49.076 Arbitration Burst: 1 00:12:49.076 00:12:49.076 Power Management 00:12:49.076 ================ 00:12:49.076 Number of Power States: 1 00:12:49.076 Current Power State: Power State #0 00:12:49.076 Power State #0: 00:12:49.076 Max Power: 0.00 W 00:12:49.076 Non-Operational State: Operational 00:12:49.076 Entry Latency: Not Reported 00:12:49.076 Exit Latency: Not Reported 00:12:49.076 Relative Read Throughput: 0 00:12:49.076 Relative Read Latency: 0 00:12:49.076 Relative Write Throughput: 0 00:12:49.076 Relative Write Latency: 0 00:12:49.076 Idle Power: Not Reported 00:12:49.076 Active Power: Not Reported 00:12:49.076 Non-Operational Permissive Mode: Not Supported 00:12:49.076 00:12:49.076 Health Information 00:12:49.076 ================== 00:12:49.076 Critical Warnings: 00:12:49.076 Available Spare Space: OK 00:12:49.076 Temperature: OK 00:12:49.076 Device Reliability: OK 00:12:49.076 Read Only: No 00:12:49.076 Volatile Memory Backup: OK 00:12:49.076 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:49.076 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:49.076 Available Spare: 0% 00:12:49.076 Available Spare Threshold: 0% 00:12:49.076 Life Percentage Used: 0% 00:12:49.076 Data Units Read: 0 00:12:49.076 Data Units Written: 0 00:12:49.076 Host Read Commands: 0 00:12:49.076 Host Write Commands: 0 00:12:49.076 Controller Busy Time: 0 minutes 00:12:49.076 Power Cycles: 0 00:12:49.076 Power On Hours: 0 hours 00:12:49.076 Unsafe Shutdowns: 0 00:12:49.076 Unrecoverable Media Errors: 0 00:12:49.076 Lifetime Error Log Entries: 0 00:12:49.076 Warning Temperature Time: 0 minutes 00:12:49.076 Critical Temperature Time: 0 minutes 00:12:49.076 00:12:49.076 Number of Queues 00:12:49.076 ================ 00:12:49.076 Number of I/O Submission Queues: 127 00:12:49.076 Number of I/O Completion Queues: 127 00:12:49.076 00:12:49.076 Active Namespaces 00:12:49.076 ================= 00:12:49.076 Namespace ID:1 00:12:49.076 Error Recovery Timeout: Unlimited 00:12:49.076 Command Set Identifier: NVM (00h) 00:12:49.076 Deallocate: Supported 00:12:49.076 Deallocated/Unwritten Error: Not Supported 00:12:49.076 Deallocated Read Value: Unknown 00:12:49.076 Deallocate in Write Zeroes: Not Supported 00:12:49.076 Deallocated Guard Field: 0xFFFF 00:12:49.076 Flush: Supported 00:12:49.076 Reservation: Supported 00:12:49.076 Namespace Sharing Capabilities: Multiple Controllers 00:12:49.076 Size (in LBAs): 131072 (0GiB) 00:12:49.076 Capacity (in LBAs): 131072 (0GiB) 00:12:49.076 Utilization (in LBAs): 131072 (0GiB) 00:12:49.076 NGUID: 11C43792BB1F43ADA807513C8426CD58 00:12:49.076 UUID: 11c43792-bb1f-43ad-a807-513c8426cd58 00:12:49.076 Thin Provisioning: Not Supported 00:12:49.076 Per-NS Atomic Units: Yes 00:12:49.076 Atomic Boundary Size (Normal): 0 00:12:49.076 Atomic Boundary Size (PFail): 0 00:12:49.076 Atomic Boundary Offset: 0 00:12:49.076 Maximum Single Source Range Length: 65535 00:12:49.076 Maximum Copy Length: 65535 00:12:49.076 Maximum Source Range Count: 1 00:12:49.076 NGUID/EUI64 Never Reused: No 00:12:49.076 Namespace Write Protected: No 00:12:49.076 Number of LBA Formats: 1 00:12:49.076 Current LBA Format: LBA Format #00 00:12:49.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:49.076 00:12:49.076
[2024-07-24 23:50:19.451406] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 [2024-07-24 23:50:19.451423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 [2024-07-24 23:50:19.451467] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD [2024-07-24 23:50:19.451485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-24 23:50:19.451496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-24 23:50:19.451507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-24 23:50:19.451517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [2024-07-24 23:50:19.451941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 [2024-07-24 23:50:19.451962] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 [2024-07-24 23:50:19.452938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller [2024-07-24 23:50:19.453024] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us [2024-07-24 23:50:19.453037] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms [2024-07-24 23:50:19.453948] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 [2024-07-24 23:50:19.453971] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds [2024-07-24 23:50:19.454024] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl [2024-07-24 23:50:19.458254] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
23:50:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:49.076 EAL: No free 2048 kB hugepages reported 
on node 1 00:12:49.334 [2024-07-24 23:50:19.688095] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:54.591 Initializing NVMe Controllers 00:12:54.591 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:54.591 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:54.591 Initialization complete. Launching workers. 00:12:54.591 ======================================================== 00:12:54.591 Latency(us) 00:12:54.591 Device Information : IOPS MiB/s Average min max 00:12:54.591 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33294.80 130.06 3846.25 1195.10 7607.31 00:12:54.591 ======================================================== 00:12:54.591 Total : 33294.80 130.06 3846.25 1195.10 7607.31 00:12:54.591 00:12:54.591 [2024-07-24 23:50:24.713943] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:54.591 23:50:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:54.592 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.592 [2024-07-24 23:50:24.957093] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:59.848 Initializing NVMe Controllers 00:12:59.848 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:59.848 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:59.848 Initialization complete. Launching workers. 
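The read/write bandwidth figures in this stretch come from SPDK's bundled spdk_nvme_perf tool; no kernel NVMe driver is involved, since the target is addressed entirely through the -r transport-ID string (trtype:VFIOUSER with the vfio-user socket directory as traddr). A minimal sketch of re-running the same measurement by hand, assuming only a built SPDK checkout at $SPDK_DIR (every other argument is copied verbatim from the command logged above):

  # 5-second, 4 KiB, queue-depth-128 sequential-read run pinned to core 1 (-c 0x2).
  # -s 256 sizes the hugepage pool in MB; -g appears to correspond to DPDK's
  # --single-file-segments (the EAL parameter dump for the identify run later in
  # this log shows that flag being passed through).
  SPDK_DIR=/path/to/spdk   # assumption: location of a built SPDK tree
  "$SPDK_DIR"/build/bin/spdk_nvme_perf \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2

Swapping -w read for -w write gives the write-side run whose latency table follows.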
00:12:59.848 ======================================================== 00:12:59.848 Latency(us) 00:12:59.848 Device Information : IOPS MiB/s Average min max 00:12:59.848 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.17 62.70 7982.88 5950.00 11972.76 00:12:59.848 ======================================================== 00:12:59.848 Total : 16051.17 62.70 7982.88 5950.00 11972.76 00:12:59.848 00:12:59.848 [2024-07-24 23:50:29.998600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:59.848 23:50:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:59.848 EAL: No free 2048 kB hugepages reported on node 1 00:12:59.848 [2024-07-24 23:50:30.218746] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:05.172 [2024-07-24 23:50:35.299619] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:05.172 Initializing NVMe Controllers 00:13:05.172 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:05.172 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:05.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:05.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:05.172 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:05.172 Initialization complete. Launching workers. 00:13:05.172 Starting thread on core 2 00:13:05.172 Starting thread on core 3 00:13:05.172 Starting thread on core 1 00:13:05.172 23:50:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:05.172 EAL: No free 2048 kB hugepages reported on node 1 00:13:05.172 [2024-07-24 23:50:35.601749] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.449 [2024-07-24 23:50:38.667448] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.449 Initializing NVMe Controllers 00:13:08.449 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.449 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.449 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:08.449 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:08.449 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:08.449 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:08.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:08.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:08.449 Initialization complete. Launching workers. 
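The arbitration example that has just launched echoes its effective configuration (-q 64, -w randrw -M 50, core mask 0xf, -n 100000 I/Os) and then prints one IOPS line per worker thread: each core in the mask gets its own thread, and the threads below all submit through urgent-priority queue pairs. A sketch of the invocation as run here, under the same $SPDK_DIR assumption as the perf sketch above:

  # Three-second arbitration demo across cores 0-3 (-c 0xf comes from the
  # echoed default config, not the command line). -d 256 is copied verbatim
  # from the log; it presumably sizes hugepage memory, analogous to perf's -s
  # (an assumption).
  "$SPDK_DIR"/build/examples/arbitration -t 3 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -d 256 -g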
00:13:08.449 Starting thread on core 1 with urgent priority queue 00:13:08.449 Starting thread on core 2 with urgent priority queue 00:13:08.449 Starting thread on core 3 with urgent priority queue 00:13:08.449 Starting thread on core 0 with urgent priority queue 00:13:08.449 SPDK bdev Controller (SPDK1 ) core 0: 4452.33 IO/s 22.46 secs/100000 ios 00:13:08.449 SPDK bdev Controller (SPDK1 ) core 1: 5028.33 IO/s 19.89 secs/100000 ios 00:13:08.449 SPDK bdev Controller (SPDK1 ) core 2: 5260.67 IO/s 19.01 secs/100000 ios 00:13:08.449 SPDK bdev Controller (SPDK1 ) core 3: 5464.33 IO/s 18.30 secs/100000 ios 00:13:08.449 ======================================================== 00:13:08.449 00:13:08.449 23:50:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:08.449 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.449 [2024-07-24 23:50:38.967737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:08.449 Initializing NVMe Controllers 00:13:08.449 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.449 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:08.449 Namespace ID: 1 size: 0GB 00:13:08.449 Initialization complete. 00:13:08.449 INFO: using host memory buffer for IO 00:13:08.449 Hello world! 00:13:08.449 [2024-07-24 23:50:39.005355] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:08.449 23:50:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:08.705 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.705 [2024-07-24 23:50:39.303695] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:10.076 Initializing NVMe Controllers 00:13:10.076 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.076 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.076 Initialization complete. Launching workers. 
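The overhead tool attached above measures per-I/O software overhead: judging by its output, it times the submission call and the completion handling of each I/O separately, in nanoseconds, and -H evidently switches on the cumulative 'Submit' and 'Complete' histograms printed below (per-bucket counts in parentheses). A sketch of the invocation, again assuming a built tree at $SPDK_DIR:

  # One-second, 4 KiB overhead measurement with histograms (-H); all flags
  # besides the $SPDK_DIR path are verbatim from the logged command.
  "$SPDK_DIR"/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'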
00:13:10.076 submit (in ns) avg, min, max = 10396.8, 3496.7, 4018095.6 00:13:10.076 complete (in ns) avg, min, max = 25043.4, 2071.1, 6990914.4 00:13:10.076 00:13:10.076 Submit histogram 00:13:10.076 ================ 00:13:10.076 Range in us Cumulative Count 00:13:10.076 3.484 - 3.508: 0.0383% ( 5) 00:13:10.076 3.508 - 3.532: 0.4289% ( 51) 00:13:10.076 3.532 - 3.556: 1.4476% ( 133) 00:13:10.076 3.556 - 3.579: 4.4884% ( 397) 00:13:10.076 3.579 - 3.603: 9.6048% ( 668) 00:13:10.076 3.603 - 3.627: 18.3211% ( 1138) 00:13:10.076 3.627 - 3.650: 28.0867% ( 1275) 00:13:10.076 3.650 - 3.674: 36.9945% ( 1163) 00:13:10.076 3.674 - 3.698: 43.6811% ( 873) 00:13:10.076 3.698 - 3.721: 49.3566% ( 741) 00:13:10.076 3.721 - 3.745: 53.7148% ( 569) 00:13:10.076 3.745 - 3.769: 57.4525% ( 488) 00:13:10.076 3.769 - 3.793: 61.4047% ( 516) 00:13:10.076 3.793 - 3.816: 64.5221% ( 407) 00:13:10.076 3.816 - 3.840: 68.1985% ( 480) 00:13:10.076 3.840 - 3.864: 72.8401% ( 606) 00:13:10.076 3.864 - 3.887: 77.6731% ( 631) 00:13:10.076 3.887 - 3.911: 81.9164% ( 554) 00:13:10.076 3.911 - 3.935: 85.0031% ( 403) 00:13:10.076 3.935 - 3.959: 86.8413% ( 240) 00:13:10.076 3.959 - 3.982: 88.4191% ( 206) 00:13:10.076 3.982 - 4.006: 89.9433% ( 199) 00:13:10.076 4.006 - 4.030: 91.0463% ( 144) 00:13:10.076 4.030 - 4.053: 91.9960% ( 124) 00:13:10.076 4.053 - 4.077: 93.0070% ( 132) 00:13:10.076 4.077 - 4.101: 93.9262% ( 120) 00:13:10.076 4.101 - 4.124: 94.6921% ( 100) 00:13:10.076 4.124 - 4.148: 95.2282% ( 70) 00:13:10.076 4.148 - 4.172: 95.6265% ( 52) 00:13:10.076 4.172 - 4.196: 95.8487% ( 29) 00:13:10.076 4.196 - 4.219: 96.1703% ( 42) 00:13:10.076 4.219 - 4.243: 96.3312% ( 21) 00:13:10.076 4.243 - 4.267: 96.4614% ( 17) 00:13:10.076 4.267 - 4.290: 96.5763% ( 15) 00:13:10.076 4.290 - 4.314: 96.6988% ( 16) 00:13:10.076 4.314 - 4.338: 96.7984% ( 13) 00:13:10.076 4.338 - 4.361: 96.8827% ( 11) 00:13:10.076 4.361 - 4.385: 96.9363% ( 7) 00:13:10.076 4.385 - 4.409: 96.9822% ( 6) 00:13:10.076 4.409 - 4.433: 97.0282% ( 6) 00:13:10.076 4.433 - 4.456: 97.1124% ( 11) 00:13:10.076 4.456 - 4.480: 97.1354% ( 3) 00:13:10.076 4.480 - 4.504: 97.1507% ( 2) 00:13:10.076 4.504 - 4.527: 97.1737% ( 3) 00:13:10.076 4.527 - 4.551: 97.1814% ( 1) 00:13:10.076 4.551 - 4.575: 97.1890% ( 1) 00:13:10.076 4.646 - 4.670: 97.2197% ( 4) 00:13:10.076 4.670 - 4.693: 97.2426% ( 3) 00:13:10.076 4.693 - 4.717: 97.2809% ( 5) 00:13:10.076 4.717 - 4.741: 97.3346% ( 7) 00:13:10.076 4.741 - 4.764: 97.3958% ( 8) 00:13:10.076 4.764 - 4.788: 97.4418% ( 6) 00:13:10.076 4.788 - 4.812: 97.4801% ( 5) 00:13:10.076 4.812 - 4.836: 97.5567% ( 10) 00:13:10.076 4.836 - 4.859: 97.6180% ( 8) 00:13:10.076 4.859 - 4.883: 97.6869% ( 9) 00:13:10.076 4.883 - 4.907: 97.7328% ( 6) 00:13:10.076 4.907 - 4.930: 97.7788% ( 6) 00:13:10.076 4.930 - 4.954: 97.8018% ( 3) 00:13:10.076 4.954 - 4.978: 97.8401% ( 5) 00:13:10.076 4.978 - 5.001: 97.8784% ( 5) 00:13:10.076 5.001 - 5.025: 97.9090% ( 4) 00:13:10.076 5.025 - 5.049: 97.9396% ( 4) 00:13:10.076 5.049 - 5.073: 97.9473% ( 1) 00:13:10.076 5.073 - 5.096: 97.9550% ( 1) 00:13:10.076 5.096 - 5.120: 97.9703% ( 2) 00:13:10.076 5.120 - 5.144: 97.9933% ( 3) 00:13:10.076 5.144 - 5.167: 98.0239% ( 4) 00:13:10.076 5.167 - 5.191: 98.0316% ( 1) 00:13:10.076 5.191 - 5.215: 98.0392% ( 1) 00:13:10.076 5.215 - 5.239: 98.0469% ( 1) 00:13:10.076 5.286 - 5.310: 98.0545% ( 1) 00:13:10.076 5.357 - 5.381: 98.0699% ( 2) 00:13:10.076 5.452 - 5.476: 98.0775% ( 1) 00:13:10.076 5.499 - 5.523: 98.0852% ( 1) 00:13:10.076 5.523 - 5.547: 98.1081% ( 3) 00:13:10.076 5.855 - 5.879: 98.1158% ( 1) 
00:13:10.076 5.950 - 5.973: 98.1235% ( 1) 00:13:10.076 6.116 - 6.163: 98.1311% ( 1) 00:13:10.076 6.258 - 6.305: 98.1388% ( 1) 00:13:10.076 6.542 - 6.590: 98.1464% ( 1) 00:13:10.076 6.590 - 6.637: 98.1541% ( 1) 00:13:10.076 6.779 - 6.827: 98.1618% ( 1) 00:13:10.076 6.827 - 6.874: 98.1694% ( 1) 00:13:10.076 6.874 - 6.921: 98.1924% ( 3) 00:13:10.076 6.921 - 6.969: 98.2077% ( 2) 00:13:10.076 7.064 - 7.111: 98.2154% ( 1) 00:13:10.076 7.159 - 7.206: 98.2230% ( 1) 00:13:10.076 7.206 - 7.253: 98.2307% ( 1) 00:13:10.076 7.253 - 7.301: 98.2384% ( 1) 00:13:10.076 7.348 - 7.396: 98.2690% ( 4) 00:13:10.076 7.396 - 7.443: 98.2996% ( 4) 00:13:10.076 7.443 - 7.490: 98.3226% ( 3) 00:13:10.076 7.538 - 7.585: 98.3379% ( 2) 00:13:10.076 7.585 - 7.633: 98.3456% ( 1) 00:13:10.076 7.633 - 7.680: 98.3532% ( 1) 00:13:10.076 7.727 - 7.775: 98.3609% ( 1) 00:13:10.076 7.775 - 7.822: 98.3762% ( 2) 00:13:10.076 7.822 - 7.870: 98.3839% ( 1) 00:13:10.076 7.870 - 7.917: 98.3915% ( 1) 00:13:10.076 7.917 - 7.964: 98.4069% ( 2) 00:13:10.076 7.964 - 8.012: 98.4145% ( 1) 00:13:10.076 8.012 - 8.059: 98.4375% ( 3) 00:13:10.076 8.059 - 8.107: 98.4452% ( 1) 00:13:10.076 8.154 - 8.201: 98.4528% ( 1) 00:13:10.076 8.201 - 8.249: 98.4681% ( 2) 00:13:10.076 8.296 - 8.344: 98.4758% ( 1) 00:13:10.076 8.344 - 8.391: 98.4835% ( 1) 00:13:10.076 8.439 - 8.486: 98.4911% ( 1) 00:13:10.076 8.628 - 8.676: 98.4988% ( 1) 00:13:10.076 8.723 - 8.770: 98.5064% ( 1) 00:13:10.076 8.818 - 8.865: 98.5294% ( 3) 00:13:10.076 8.865 - 8.913: 98.5371% ( 1) 00:13:10.076 8.913 - 8.960: 98.5524% ( 2) 00:13:10.076 8.960 - 9.007: 98.5600% ( 1) 00:13:10.076 9.055 - 9.102: 98.5677% ( 1) 00:13:10.076 9.197 - 9.244: 98.5754% ( 1) 00:13:10.076 9.339 - 9.387: 98.5830% ( 1) 00:13:10.076 9.387 - 9.434: 98.5983% ( 2) 00:13:10.076 9.529 - 9.576: 98.6137% ( 2) 00:13:10.076 9.908 - 9.956: 98.6213% ( 1) 00:13:10.076 10.240 - 10.287: 98.6290% ( 1) 00:13:10.076 10.287 - 10.335: 98.6443% ( 2) 00:13:10.076 10.619 - 10.667: 98.6520% ( 1) 00:13:10.076 11.188 - 11.236: 98.6673% ( 2) 00:13:10.076 11.236 - 11.283: 98.6749% ( 1) 00:13:10.076 11.283 - 11.330: 98.6826% ( 1) 00:13:10.076 11.615 - 11.662: 98.6903% ( 1) 00:13:10.076 11.852 - 11.899: 98.6979% ( 1) 00:13:10.076 11.994 - 12.041: 98.7056% ( 1) 00:13:10.076 12.136 - 12.231: 98.7132% ( 1) 00:13:10.077 12.231 - 12.326: 98.7209% ( 1) 00:13:10.077 12.326 - 12.421: 98.7286% ( 1) 00:13:10.077 12.610 - 12.705: 98.7362% ( 1) 00:13:10.077 12.895 - 12.990: 98.7439% ( 1) 00:13:10.077 13.559 - 13.653: 98.7515% ( 1) 00:13:10.077 13.653 - 13.748: 98.7592% ( 1) 00:13:10.077 13.748 - 13.843: 98.7669% ( 1) 00:13:10.077 14.222 - 14.317: 98.7745% ( 1) 00:13:10.077 14.886 - 14.981: 98.7822% ( 1) 00:13:10.077 17.161 - 17.256: 98.7898% ( 1) 00:13:10.077 17.256 - 17.351: 98.7975% ( 1) 00:13:10.077 17.351 - 17.446: 98.8051% ( 1) 00:13:10.077 17.446 - 17.541: 98.8281% ( 3) 00:13:10.077 17.541 - 17.636: 98.8664% ( 5) 00:13:10.077 17.636 - 17.730: 98.9124% ( 6) 00:13:10.077 17.730 - 17.825: 98.9660% ( 7) 00:13:10.077 17.825 - 17.920: 98.9890% ( 3) 00:13:10.077 17.920 - 18.015: 99.0349% ( 6) 00:13:10.077 18.015 - 18.110: 99.1115% ( 10) 00:13:10.077 18.110 - 18.204: 99.2111% ( 13) 00:13:10.077 18.204 - 18.299: 99.3030% ( 12) 00:13:10.077 18.299 - 18.394: 99.3796% ( 10) 00:13:10.077 18.394 - 18.489: 99.4638% ( 11) 00:13:10.077 18.489 - 18.584: 99.5558% ( 12) 00:13:10.077 18.584 - 18.679: 99.6247% ( 9) 00:13:10.077 18.679 - 18.773: 99.6630% ( 5) 00:13:10.077 18.773 - 18.868: 99.7013% ( 5) 00:13:10.077 18.868 - 18.963: 99.7243% ( 3) 00:13:10.077 18.963 - 
19.058: 99.7319% ( 1) 00:13:10.077 19.058 - 19.153: 99.7396% ( 1) 00:13:10.077 19.153 - 19.247: 99.7472% ( 1) 00:13:10.077 19.247 - 19.342: 99.7702% ( 3) 00:13:10.077 19.342 - 19.437: 99.7855% ( 2) 00:13:10.077 20.101 - 20.196: 99.7932% ( 1) 00:13:10.077 21.807 - 21.902: 99.8009% ( 1) 00:13:10.077 22.092 - 22.187: 99.8085% ( 1) 00:13:10.077 22.566 - 22.661: 99.8162% ( 1) 00:13:10.077 22.850 - 22.945: 99.8315% ( 2) 00:13:10.077 28.255 - 28.444: 99.8392% ( 1) 00:13:10.077 3980.705 - 4004.978: 99.9694% ( 17) 00:13:10.077 4004.978 - 4029.250: 100.0000% ( 4) 00:13:10.077 00:13:10.077 Complete histogram 00:13:10.077 ================== 00:13:10.077 Range in us Cumulative Count 00:13:10.077 2.062 - 2.074: 0.1072% ( 14) 00:13:10.077 2.074 - 2.086: 18.0530% ( 2343) 00:13:10.077 2.086 - 2.098: 41.1918% ( 3021) 00:13:10.077 2.098 - 2.110: 43.2828% ( 273) 00:13:10.077 2.110 - 2.121: 51.6544% ( 1093) 00:13:10.077 2.121 - 2.133: 55.6219% ( 518) 00:13:10.077 2.133 - 2.145: 57.9274% ( 301) 00:13:10.077 2.145 - 2.157: 67.9075% ( 1303) 00:13:10.077 2.157 - 2.169: 72.2350% ( 565) 00:13:10.077 2.169 - 2.181: 73.4605% ( 160) 00:13:10.077 2.181 - 2.193: 76.9838% ( 460) 00:13:10.077 2.193 - 2.204: 78.7990% ( 237) 00:13:10.077 2.204 - 2.216: 79.7258% ( 121) 00:13:10.077 2.216 - 2.228: 84.4056% ( 611) 00:13:10.077 2.228 - 2.240: 88.0821% ( 480) 00:13:10.077 2.240 - 2.252: 90.0123% ( 252) 00:13:10.077 2.252 - 2.264: 92.2641% ( 294) 00:13:10.077 2.264 - 2.276: 93.3441% ( 141) 00:13:10.077 2.276 - 2.287: 93.7347% ( 51) 00:13:10.077 2.287 - 2.299: 94.2555% ( 68) 00:13:10.077 2.299 - 2.311: 94.8300% ( 75) 00:13:10.077 2.311 - 2.323: 95.6572% ( 108) 00:13:10.077 2.323 - 2.335: 95.7797% ( 16) 00:13:10.077 2.335 - 2.347: 95.8257% ( 6) 00:13:10.077 2.347 - 2.359: 95.9176% ( 12) 00:13:10.077 2.359 - 2.370: 96.0938% ( 23) 00:13:10.077 2.370 - 2.382: 96.3695% ( 36) 00:13:10.077 2.382 - 2.394: 96.8290% ( 60) 00:13:10.077 2.394 - 2.406: 97.1278% ( 39) 00:13:10.077 2.406 - 2.418: 97.3575% ( 30) 00:13:10.077 2.418 - 2.430: 97.5260% ( 22) 00:13:10.077 2.430 - 2.441: 97.6409% ( 15) 00:13:10.077 2.441 - 2.453: 97.7711% ( 17) 00:13:10.077 2.453 - 2.465: 97.9243% ( 20) 00:13:10.077 2.465 - 2.477: 98.0775% ( 20) 00:13:10.077 2.477 - 2.489: 98.1464% ( 9) 00:13:10.077 2.489 - 2.501: 98.2077% ( 8) 00:13:10.077 2.501 - 2.513: 98.2613% ( 7) 00:13:10.077 2.513 - 2.524: 98.3303% ( 9) 00:13:10.077 2.524 - 2.536: 98.4069% ( 10) 00:13:10.077 2.536 - 2.548: 98.4375% ( 4) 00:13:10.077 2.548 - 2.560: 98.4605% ( 3) 00:13:10.077 2.560 - 2.572: 98.4835% ( 3) 00:13:10.077 2.584 - 2.596: 98.4911% ( 1) 00:13:10.077 2.596 - 2.607: 98.5064% ( 2) 00:13:10.077 2.607 - 2.619: 98.5218% ( 2) 00:13:10.077 2.619 - 2.631: 98.5371% ( 2) 00:13:10.077 2.631 - 2.643: 98.5677% ( 4) 00:13:10.077 2.643 - 2.655: 98.5754% ( 1) 00:13:10.077 2.667 - 2.679: 98.5830% ( 1) 00:13:10.077 2.679 - 2.690: 98.5907% ( 1) 00:13:10.077 2.714 - 2.726: 98.6060% ( 2) 00:13:10.077 2.726 - 2.738: 98.6137% ( 1) 00:13:10.077 2.738 - 2.750: 98.6213% ( 1) 00:13:10.077 2.844 - 2.856: 98.6290% ( 1) 00:13:10.077 2.856 - 2.868: 98.6366% ( 1) 00:13:10.077 3.200 - 3.224: 98.6443% ( 1) 00:13:10.077 3.319 - 3.342: 98.6596% ( 2) 00:13:10.077 3.390 - 3.413: 98.6673% ( 1) 00:13:10.077 3.437 - 3.461: 98.6749% ( 1) 00:13:10.077 3.461 - 3.484: 98.6826% ( 1) 00:13:10.077 3.484 - 3.508: 98.6903% ( 1) 00:13:10.077 3.508 - 3.532: 98.7056% ( 2) 00:13:10.077 3.532 - 3.556: 98.7132% ( 1) 00:13:10.077 3.579 - 3.603: 98.7286% ( 2) 00:13:10.077 3.603 - 3.627: 98.7362% ( 1) 00:13:10.077 3.627 - 3.650: 98.7669% ( 4) 
00:13:10.077 3.674 - 3.698: 98.7822% ( 2) 00:13:10.077 3.698 - 3.721: 98.7898% ( 1) 00:13:10.077 3.745 - 3.769: 98.7975% ( 1) 00:13:10.077 3.769 - 3.793: 98.8051% ( 1) 00:13:10.077 3.816 - 3.840: 98.8128% ( 1) 00:13:10.077 3.982 - 4.006: 98.8281% ( 2) 00:13:10.077 4.077 - 4.101: 98.8358% ( 1) 00:13:10.077 4.148 - 4.172: 98.8434% ( 1) 00:13:10.077 4.290 - 4.314: 98.8511% ( 1) 00:13:10.077 5.428 - 5.452: 98.8588% ( 1) 00:13:10.077 5.476 - 5.499: 98.8741% ( 2) 00:13:10.077 5.523 - 5.547: 98.8817% ( 1) 00:13:10.077 5.760 - 5.784: 98.8894% ( 1)
[2024-07-24 23:50:40.326908] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller
00:13:10.077 6.021 - 6.044: 98.8971% ( 1) 00:13:10.077 6.400 - 6.447: 98.9047% ( 1) 00:13:10.077 6.684 - 6.732: 98.9124% ( 1) 00:13:10.077 7.016 - 7.064: 98.9200% ( 1) 00:13:10.077 8.770 - 8.818: 98.9277% ( 1) 00:13:10.077 10.619 - 10.667: 98.9354% ( 1) 00:13:10.077 15.170 - 15.265: 98.9430% ( 1) 00:13:10.077 15.360 - 15.455: 98.9507% ( 1) 00:13:10.077 15.739 - 15.834: 98.9583% ( 1) 00:13:10.077 15.834 - 15.929: 98.9813% ( 3) 00:13:10.077 15.929 - 16.024: 98.9890% ( 1) 00:13:10.077 16.024 - 16.119: 99.0349% ( 6) 00:13:10.077 16.119 - 16.213: 99.0502% ( 2) 00:13:10.077 16.213 - 16.308: 99.0732% ( 3) 00:13:10.077 16.308 - 16.403: 99.0962% ( 3) 00:13:10.077 16.403 - 16.498: 99.1268% ( 4) 00:13:10.077 16.498 - 16.593: 99.1575% ( 4) 00:13:10.077 16.593 - 16.687: 99.1728% ( 2) 00:13:10.077 16.687 - 16.782: 99.2417% ( 9) 00:13:10.077 16.782 - 16.877: 99.2877% ( 6) 00:13:10.077 16.877 - 16.972: 99.3030% ( 2) 00:13:10.077 16.972 - 17.067: 99.3260% ( 3) 00:13:10.077 17.067 - 17.161: 99.3566% ( 4) 00:13:10.077 17.161 - 17.256: 99.3643% ( 1) 00:13:10.077 17.256 - 17.351: 99.3719% ( 1) 00:13:10.077 17.351 - 17.446: 99.3873% ( 2) 00:13:10.077 17.636 - 17.730: 99.3949% ( 1) 00:13:10.077 17.920 - 18.015: 99.4026% ( 1) 00:13:10.077 18.015 - 18.110: 99.4102% ( 1) 00:13:10.077 18.773 - 18.868: 99.4179% ( 1) 00:13:10.077 20.670 - 20.764: 99.4256% ( 1) 00:13:10.077 25.790 - 25.979: 99.4332% ( 1) 00:13:10.077 1025.517 - 1031.585: 99.4409% ( 1) 00:13:10.077 3980.705 - 4004.978: 99.9387% ( 65) 00:13:10.077 4004.978 - 4029.250: 99.9847% ( 6) 00:13:10.077 5995.330 - 6019.603: 99.9923% ( 1) 00:13:10.077 6990.507 - 7039.052: 100.0000% ( 1) 00:13:10.077 00:13:10.077 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:10.077 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:10.077 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:10.077 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:10.077 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:10.077 [ 00:13:10.077 { 00:13:10.077 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:10.077 "subtype": "Discovery", 00:13:10.077 "listen_addresses": [], 00:13:10.077 "allow_any_host": true, 00:13:10.077 "hosts": [] 00:13:10.077 }, 00:13:10.078 { 00:13:10.078 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:10.078 "subtype": "NVMe", 00:13:10.078 "listen_addresses": [ 00:13:10.078 { 00:13:10.078 "trtype": "VFIOUSER",
00:13:10.078 "adrfam": "IPv4", 00:13:10.078 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:10.078 "trsvcid": "0" 00:13:10.078 } 00:13:10.078 ], 00:13:10.078 "allow_any_host": true, 00:13:10.078 "hosts": [], 00:13:10.078 "serial_number": "SPDK1", 00:13:10.078 "model_number": "SPDK bdev Controller", 00:13:10.078 "max_namespaces": 32, 00:13:10.078 "min_cntlid": 1, 00:13:10.078 "max_cntlid": 65519, 00:13:10.078 "namespaces": [ 00:13:10.078 { 00:13:10.078 "nsid": 1, 00:13:10.078 "bdev_name": "Malloc1", 00:13:10.078 "name": "Malloc1", 00:13:10.078 "nguid": "11C43792BB1F43ADA807513C8426CD58", 00:13:10.078 "uuid": "11c43792-bb1f-43ad-a807-513c8426cd58" 00:13:10.078 } 00:13:10.078 ] 00:13:10.078 }, 00:13:10.078 { 00:13:10.078 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:10.078 "subtype": "NVMe", 00:13:10.078 "listen_addresses": [ 00:13:10.078 { 00:13:10.078 "trtype": "VFIOUSER", 00:13:10.078 "adrfam": "IPv4", 00:13:10.078 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:10.078 "trsvcid": "0" 00:13:10.078 } 00:13:10.078 ], 00:13:10.078 "allow_any_host": true, 00:13:10.078 "hosts": [], 00:13:10.078 "serial_number": "SPDK2", 00:13:10.078 "model_number": "SPDK bdev Controller", 00:13:10.078 "max_namespaces": 32, 00:13:10.078 "min_cntlid": 1, 00:13:10.078 "max_cntlid": 65519, 00:13:10.078 "namespaces": [ 00:13:10.078 { 00:13:10.078 "nsid": 1, 00:13:10.078 "bdev_name": "Malloc2", 00:13:10.078 "name": "Malloc2", 00:13:10.078 "nguid": "4B2CBEA2D3974C23A51AFA242345D785", 00:13:10.078 "uuid": "4b2cbea2-d397-4c23-a51a-fa242345d785" 00:13:10.078 } 00:13:10.078 ] 00:13:10.078 } 00:13:10.078 ] 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3356910 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:10.078 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:10.335 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.335 [2024-07-24 23:50:40.826606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:10.335 Malloc3 00:13:10.591 23:50:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:10.591 [2024-07-24 23:50:41.196328] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:10.847 Asynchronous Event Request test 00:13:10.847 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.847 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:10.847 Registering asynchronous event callbacks... 00:13:10.847 Starting namespace attribute notice tests for all controllers... 00:13:10.847 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:10.847 aer_cb - Changed Namespace 00:13:10.847 Cleaning up... 00:13:10.847 [ 00:13:10.847 { 00:13:10.847 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:10.847 "subtype": "Discovery", 00:13:10.847 "listen_addresses": [], 00:13:10.847 "allow_any_host": true, 00:13:10.847 "hosts": [] 00:13:10.847 }, 00:13:10.847 { 00:13:10.847 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:10.847 "subtype": "NVMe", 00:13:10.847 "listen_addresses": [ 00:13:10.847 { 00:13:10.847 "trtype": "VFIOUSER", 00:13:10.847 "adrfam": "IPv4", 00:13:10.847 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:10.847 "trsvcid": "0" 00:13:10.847 } 00:13:10.847 ], 00:13:10.847 "allow_any_host": true, 00:13:10.847 "hosts": [], 00:13:10.847 "serial_number": "SPDK1", 00:13:10.847 "model_number": "SPDK bdev Controller", 00:13:10.847 "max_namespaces": 32, 00:13:10.847 "min_cntlid": 1, 00:13:10.847 "max_cntlid": 65519, 00:13:10.847 "namespaces": [ 00:13:10.847 { 00:13:10.847 "nsid": 1, 00:13:10.847 "bdev_name": "Malloc1", 00:13:10.847 "name": "Malloc1", 00:13:10.847 "nguid": "11C43792BB1F43ADA807513C8426CD58", 00:13:10.847 "uuid": "11c43792-bb1f-43ad-a807-513c8426cd58" 00:13:10.847 }, 00:13:10.847 { 00:13:10.847 "nsid": 2, 00:13:10.847 "bdev_name": "Malloc3", 00:13:10.847 "name": "Malloc3", 00:13:10.847 "nguid": "6AFECB90F3F8425798C49F3DD7B9052F", 00:13:10.847 "uuid": "6afecb90-f3f8-4257-98c4-9f3dd7b9052f" 00:13:10.847 } 00:13:10.847 ] 00:13:10.847 }, 00:13:10.847 { 00:13:10.847 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:10.847 "subtype": "NVMe", 00:13:10.847 "listen_addresses": [ 00:13:10.847 { 00:13:10.847 "trtype": "VFIOUSER", 00:13:10.847 "adrfam": "IPv4", 00:13:10.847 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:10.847 "trsvcid": "0" 00:13:10.847 } 00:13:10.847 ], 00:13:10.847 "allow_any_host": true, 00:13:10.847 "hosts": [], 00:13:10.847 
"serial_number": "SPDK2", 00:13:10.847 "model_number": "SPDK bdev Controller", 00:13:10.847 "max_namespaces": 32, 00:13:10.847 "min_cntlid": 1, 00:13:10.847 "max_cntlid": 65519, 00:13:10.847 "namespaces": [ 00:13:10.847 { 00:13:10.847 "nsid": 1, 00:13:10.847 "bdev_name": "Malloc2", 00:13:10.847 "name": "Malloc2", 00:13:10.847 "nguid": "4B2CBEA2D3974C23A51AFA242345D785", 00:13:10.847 "uuid": "4b2cbea2-d397-4c23-a51a-fa242345d785" 00:13:10.847 } 00:13:10.847 ] 00:13:10.847 } 00:13:10.847 ] 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3356910 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:10.847 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:11.107 [2024-07-24 23:50:41.472936] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:13:11.107 [2024-07-24 23:50:41.472981] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3356932 ] 00:13:11.107 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.107 [2024-07-24 23:50:41.505428] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:11.107 [2024-07-24 23:50:41.517416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.107 [2024-07-24 23:50:41.517447] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd9e69a1000 00:13:11.107 [2024-07-24 23:50:41.518415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.519437] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.520430] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.521438] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.522446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.523452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.524465] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.525469] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:11.107 [2024-07-24 23:50:41.526478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:11.107 [2024-07-24 23:50:41.526500] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd9e6996000 00:13:11.107 [2024-07-24 23:50:41.527627] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.107 [2024-07-24 23:50:41.542332] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:11.107 [2024-07-24 23:50:41.542369] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:11.107 [2024-07-24 23:50:41.547469] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:11.107 [2024-07-24 23:50:41.547522] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:11.107 [2024-07-24 23:50:41.547623] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:11.107 [2024-07-24 23:50:41.547644] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:11.107 [2024-07-24 23:50:41.547654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:11.107 [2024-07-24 23:50:41.548479] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:11.107 [2024-07-24 23:50:41.548506] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:11.107 [2024-07-24 23:50:41.548520] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:11.107 [2024-07-24 23:50:41.549486] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:11.107 [2024-07-24 23:50:41.549507] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:11.107 [2024-07-24 23:50:41.549521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.550490] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:11.107 [2024-07-24 23:50:41.550510] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.551498] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:11.107 [2024-07-24 23:50:41.551519] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:11.107 [2024-07-24 23:50:41.551542] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.551554] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.551663] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:11.107 [2024-07-24 23:50:41.551671] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.551679] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:11.107 [2024-07-24 23:50:41.552504] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:11.107 [2024-07-24 23:50:41.553513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:11.107 [2024-07-24 23:50:41.554516] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:11.107 [2024-07-24 23:50:41.555519] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:11.107 [2024-07-24 23:50:41.555599] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:11.107 [2024-07-24 23:50:41.556535] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:11.107 [2024-07-24 23:50:41.556556] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:11.107 [2024-07-24 23:50:41.556566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:11.107 [2024-07-24 23:50:41.556591] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:11.108 [2024-07-24 23:50:41.556608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.556630] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.108 [2024-07-24 23:50:41.556640] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.108 [2024-07-24 23:50:41.556647] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.108 [2024-07-24 23:50:41.556665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.561259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.561282] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:11.108 [2024-07-24 23:50:41.561291] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:11.108 [2024-07-24 23:50:41.561299] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:11.108 [2024-07-24 23:50:41.561306] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:11.108 [2024-07-24 23:50:41.561314] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:11.108 [2024-07-24 23:50:41.561322] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:11.108 [2024-07-24 23:50:41.561330] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.561342] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.561363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.568254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.568283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.108 [2024-07-24 23:50:41.568298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.108 [2024-07-24 23:50:41.568309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.108 [2024-07-24 23:50:41.568324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:11.108 [2024-07-24 23:50:41.568334] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.568348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.568363] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.577250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.577268] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:11.108 [2024-07-24 23:50:41.577277] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.577293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.577304] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.577318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.585272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.585346] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.585363] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.585377] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:11.108 [2024-07-24 23:50:41.585385] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:11.108 [2024-07-24 23:50:41.585392] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.108 [2024-07-24 23:50:41.585401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.593252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.593275] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:11.108 [2024-07-24 23:50:41.593291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.593306] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.593320] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.108 [2024-07-24 23:50:41.593328] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.108 [2024-07-24 23:50:41.593334] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.108 [2024-07-24 23:50:41.593343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.601253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.601284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.601300] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.601314] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:11.108 [2024-07-24 23:50:41.601322] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.108 [2024-07-24 23:50:41.601328] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.108 [2024-07-24 23:50:41.601338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.609250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.609272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609302] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609323] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609332] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609340] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:11.108 [2024-07-24 23:50:41.609348] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:11.108 [2024-07-24 23:50:41.609356] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:11.108 [2024-07-24 23:50:41.609382] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.617250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.617277] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.625254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.625278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.633250] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.633276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:11.108 [2024-07-24 23:50:41.641267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:11.108 [2024-07-24 23:50:41.641298] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:11.108 [2024-07-24 23:50:41.641313] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:11.108 [2024-07-24 23:50:41.641320] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:11.108 [2024-07-24 23:50:41.641326] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:11.108 [2024-07-24 23:50:41.641332] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:11.108 [2024-07-24 23:50:41.641341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:11.108 [2024-07-24 23:50:41.641353] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:11.108 [2024-07-24 23:50:41.641362] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:11.109 [2024-07-24 23:50:41.641368] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.109 [2024-07-24 23:50:41.641377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:11.109 [2024-07-24 23:50:41.641388] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:11.109 [2024-07-24 23:50:41.641396] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:11.109 [2024-07-24 23:50:41.641402] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.109 [2024-07-24 23:50:41.641411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:11.109 [2024-07-24 23:50:41.641423] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:11.109 [2024-07-24 23:50:41.641432] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:11.109 [2024-07-24 23:50:41.641438] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:11.109 [2024-07-24 23:50:41.641446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:11.109 [2024-07-24 23:50:41.649265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:11.109 [2024-07-24 23:50:41.649293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:11.109 [2024-07-24 23:50:41.649311] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:11.109 [2024-07-24 23:50:41.649324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:11.109 ===================================================== 00:13:11.109 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:11.109 ===================================================== 00:13:11.109 Controller Capabilities/Features 00:13:11.109 ================================ 00:13:11.109 Vendor ID: 4e58 00:13:11.109 Subsystem Vendor ID: 4e58 00:13:11.109 Serial Number: SPDK2 00:13:11.109 Model Number: SPDK bdev Controller 00:13:11.109 Firmware Version: 24.09 00:13:11.109 Recommended Arb Burst: 6 00:13:11.109 IEEE OUI Identifier: 8d 6b 50 00:13:11.109 Multi-path I/O 00:13:11.109 May have multiple subsystem ports: Yes 00:13:11.109 May have multiple controllers: Yes 00:13:11.109 Associated with SR-IOV VF: No 00:13:11.109 Max Data Transfer Size: 131072 00:13:11.109 Max Number of Namespaces: 32 00:13:11.109 Max Number of I/O Queues: 127 00:13:11.109 NVMe Specification Version (VS): 1.3 00:13:11.109 NVMe Specification Version (Identify): 1.3 00:13:11.109 Maximum Queue Entries: 256 00:13:11.109 Contiguous Queues Required: Yes 00:13:11.109 Arbitration Mechanisms Supported 00:13:11.109 Weighted Round Robin: Not Supported 00:13:11.109 Vendor Specific: Not Supported 00:13:11.109 Reset Timeout: 15000 ms 00:13:11.109 Doorbell Stride: 4 bytes 00:13:11.109 NVM Subsystem Reset: Not Supported 00:13:11.109 Command Sets Supported 00:13:11.109 NVM Command Set: Supported 00:13:11.109 Boot Partition: Not Supported 00:13:11.109 Memory Page Size Minimum: 4096 bytes 00:13:11.109 Memory Page Size Maximum: 4096 bytes 00:13:11.109 Persistent Memory Region: Not Supported 00:13:11.109 Optional Asynchronous Events Supported 00:13:11.109 Namespace Attribute Notices: Supported 00:13:11.109 Firmware Activation Notices: Not Supported 00:13:11.109 ANA Change Notices: Not Supported 00:13:11.109 PLE Aggregate Log Change Notices: Not Supported 00:13:11.109 LBA Status Info Alert Notices: Not Supported 00:13:11.109 EGE Aggregate Log Change Notices: Not Supported 00:13:11.109 Normal NVM Subsystem Shutdown event: Not Supported 00:13:11.109 Zone Descriptor Change Notices: Not Supported 00:13:11.109 Discovery Log Change Notices: Not Supported 00:13:11.109 Controller Attributes 00:13:11.109 128-bit Host Identifier: Supported 00:13:11.109 Non-Operational Permissive Mode: Not Supported 00:13:11.109 NVM Sets: Not Supported 00:13:11.109 Read Recovery Levels: Not Supported 00:13:11.109 Endurance Groups: Not Supported 00:13:11.109 Predictable Latency Mode: Not Supported 00:13:11.109 Traffic Based Keep ALive: Not Supported 00:13:11.109 Namespace Granularity: Not Supported 00:13:11.109 SQ Associations: Not Supported 00:13:11.109 UUID List: Not Supported 00:13:11.109 Multi-Domain Subsystem: Not Supported 00:13:11.109 Fixed Capacity Management: Not Supported 00:13:11.109 Variable Capacity Management: Not Supported 00:13:11.109 Delete Endurance Group: Not Supported 00:13:11.109 Delete NVM Set: Not Supported 00:13:11.109 Extended LBA Formats Supported: Not Supported 00:13:11.109 Flexible Data Placement Supported: Not Supported 00:13:11.109 00:13:11.109 Controller Memory Buffer Support 00:13:11.109 ================================ 00:13:11.109 Supported: No 00:13:11.109 00:13:11.109 Persistent Memory Region Support 00:13:11.109 
================================ 00:13:11.109 Supported: No 00:13:11.109 00:13:11.109 Admin Command Set Attributes 00:13:11.109 ============================ 00:13:11.109 Security Send/Receive: Not Supported 00:13:11.109 Format NVM: Not Supported 00:13:11.109 Firmware Activate/Download: Not Supported 00:13:11.109 Namespace Management: Not Supported 00:13:11.109 Device Self-Test: Not Supported 00:13:11.109 Directives: Not Supported 00:13:11.109 NVMe-MI: Not Supported 00:13:11.109 Virtualization Management: Not Supported 00:13:11.109 Doorbell Buffer Config: Not Supported 00:13:11.109 Get LBA Status Capability: Not Supported 00:13:11.109 Command & Feature Lockdown Capability: Not Supported 00:13:11.109 Abort Command Limit: 4 00:13:11.109 Async Event Request Limit: 4 00:13:11.109 Number of Firmware Slots: N/A 00:13:11.109 Firmware Slot 1 Read-Only: N/A 00:13:11.109 Firmware Activation Without Reset: N/A 00:13:11.109 Multiple Update Detection Support: N/A 00:13:11.109 Firmware Update Granularity: No Information Provided 00:13:11.109 Per-Namespace SMART Log: No 00:13:11.109 Asymmetric Namespace Access Log Page: Not Supported 00:13:11.109 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:11.109 Command Effects Log Page: Supported 00:13:11.109 Get Log Page Extended Data: Supported 00:13:11.109 Telemetry Log Pages: Not Supported 00:13:11.109 Persistent Event Log Pages: Not Supported 00:13:11.109 Supported Log Pages Log Page: May Support 00:13:11.109 Commands Supported & Effects Log Page: Not Supported 00:13:11.109 Feature Identifiers & Effects Log Page:May Support 00:13:11.109 NVMe-MI Commands & Effects Log Page: May Support 00:13:11.109 Data Area 4 for Telemetry Log: Not Supported 00:13:11.109 Error Log Page Entries Supported: 128 00:13:11.109 Keep Alive: Supported 00:13:11.109 Keep Alive Granularity: 10000 ms 00:13:11.109 00:13:11.109 NVM Command Set Attributes 00:13:11.109 ========================== 00:13:11.109 Submission Queue Entry Size 00:13:11.109 Max: 64 00:13:11.109 Min: 64 00:13:11.109 Completion Queue Entry Size 00:13:11.109 Max: 16 00:13:11.109 Min: 16 00:13:11.109 Number of Namespaces: 32 00:13:11.109 Compare Command: Supported 00:13:11.109 Write Uncorrectable Command: Not Supported 00:13:11.109 Dataset Management Command: Supported 00:13:11.109 Write Zeroes Command: Supported 00:13:11.109 Set Features Save Field: Not Supported 00:13:11.109 Reservations: Not Supported 00:13:11.109 Timestamp: Not Supported 00:13:11.109 Copy: Supported 00:13:11.109 Volatile Write Cache: Present 00:13:11.109 Atomic Write Unit (Normal): 1 00:13:11.109 Atomic Write Unit (PFail): 1 00:13:11.109 Atomic Compare & Write Unit: 1 00:13:11.109 Fused Compare & Write: Supported 00:13:11.109 Scatter-Gather List 00:13:11.109 SGL Command Set: Supported (Dword aligned) 00:13:11.109 SGL Keyed: Not Supported 00:13:11.109 SGL Bit Bucket Descriptor: Not Supported 00:13:11.109 SGL Metadata Pointer: Not Supported 00:13:11.109 Oversized SGL: Not Supported 00:13:11.109 SGL Metadata Address: Not Supported 00:13:11.109 SGL Offset: Not Supported 00:13:11.109 Transport SGL Data Block: Not Supported 00:13:11.109 Replay Protected Memory Block: Not Supported 00:13:11.109 00:13:11.109 Firmware Slot Information 00:13:11.109 ========================= 00:13:11.109 Active slot: 1 00:13:11.109 Slot 1 Firmware Revision: 24.09 00:13:11.109 00:13:11.109 00:13:11.109 Commands Supported and Effects 00:13:11.109 ============================== 00:13:11.109 Admin Commands 00:13:11.109 -------------- 00:13:11.109 Get Log Page (02h): Supported 
00:13:11.109 Identify (06h): Supported 00:13:11.109 Abort (08h): Supported 00:13:11.109 Set Features (09h): Supported 00:13:11.109 Get Features (0Ah): Supported 00:13:11.109 Asynchronous Event Request (0Ch): Supported 00:13:11.109 Keep Alive (18h): Supported 00:13:11.109 I/O Commands 00:13:11.109 ------------ 00:13:11.110 Flush (00h): Supported LBA-Change 00:13:11.110 Write (01h): Supported LBA-Change 00:13:11.110 Read (02h): Supported 00:13:11.110 Compare (05h): Supported 00:13:11.110 Write Zeroes (08h): Supported LBA-Change 00:13:11.110 Dataset Management (09h): Supported LBA-Change 00:13:11.110 Copy (19h): Supported LBA-Change 00:13:11.110 00:13:11.110 Error Log 00:13:11.110 ========= 00:13:11.110 00:13:11.110 Arbitration 00:13:11.110 =========== 00:13:11.110 Arbitration Burst: 1 00:13:11.110 00:13:11.110 Power Management 00:13:11.110 ================ 00:13:11.110 Number of Power States: 1 00:13:11.110 Current Power State: Power State #0 00:13:11.110 Power State #0: 00:13:11.110 Max Power: 0.00 W 00:13:11.110 Non-Operational State: Operational 00:13:11.110 Entry Latency: Not Reported 00:13:11.110 Exit Latency: Not Reported 00:13:11.110 Relative Read Throughput: 0 00:13:11.110 Relative Read Latency: 0 00:13:11.110 Relative Write Throughput: 0 00:13:11.110 Relative Write Latency: 0 00:13:11.110 Idle Power: Not Reported 00:13:11.110 Active Power: Not Reported 00:13:11.110 Non-Operational Permissive Mode: Not Supported 00:13:11.110 00:13:11.110 Health Information 00:13:11.110 ================== 00:13:11.110 Critical Warnings: 00:13:11.110 Available Spare Space: OK 00:13:11.110 Temperature: OK 00:13:11.110 Device Reliability: OK 00:13:11.110 Read Only: No 00:13:11.110 Volatile Memory Backup: OK 00:13:11.110 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:11.110 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:11.110 Available Spare: 0% 00:13:11.110 [2024-07-24 23:50:41.649448] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:11.110 [2024-07-24 23:50:41.657253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:11.110 [2024-07-24 23:50:41.657303] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:11.110 [2024-07-24 23:50:41.657321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.110 [2024-07-24 23:50:41.657332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.110 [2024-07-24 23:50:41.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.110 [2024-07-24 23:50:41.657352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:11.110 [2024-07-24 23:50:41.657430] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:11.110 [2024-07-24 23:50:41.657455] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:11.110 [2024-07-24 23:50:41.658435] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:13:11.110 [2024-07-24 23:50:41.661278] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:11.110 [2024-07-24 23:50:41.661302] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:11.110 [2024-07-24 23:50:41.661456] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:11.110 [2024-07-24 23:50:41.661479] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:11.110 [2024-07-24 23:50:41.661533] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:11.110 [2024-07-24 23:50:41.662763] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:11.110 Available Spare Threshold: 0% 00:13:11.110 Life Percentage Used: 0% 00:13:11.110 Data Units Read: 0 00:13:11.110 Data Units Written: 0 00:13:11.110 Host Read Commands: 0 00:13:11.110 Host Write Commands: 0 00:13:11.110 Controller Busy Time: 0 minutes 00:13:11.110 Power Cycles: 0 00:13:11.110 Power On Hours: 0 hours 00:13:11.110 Unsafe Shutdowns: 0 00:13:11.110 Unrecoverable Media Errors: 0 00:13:11.110 Lifetime Error Log Entries: 0 00:13:11.110 Warning Temperature Time: 0 minutes 00:13:11.110 Critical Temperature Time: 0 minutes 00:13:11.110 00:13:11.110 Number of Queues 00:13:11.110 ================ 00:13:11.110 Number of I/O Submission Queues: 127 00:13:11.110 Number of I/O Completion Queues: 127 00:13:11.110 00:13:11.110 Active Namespaces 00:13:11.110 ================= 00:13:11.110 Namespace ID:1 00:13:11.110 Error Recovery Timeout: Unlimited 00:13:11.110 Command Set Identifier: NVM (00h) 00:13:11.110 Deallocate: Supported 00:13:11.110 Deallocated/Unwritten Error: Not Supported 00:13:11.110 Deallocated Read Value: Unknown 00:13:11.110 Deallocate in Write Zeroes: Not Supported 00:13:11.110 Deallocated Guard Field: 0xFFFF 00:13:11.110 Flush: Supported 00:13:11.110 Reservation: Supported 00:13:11.110 Namespace Sharing Capabilities: Multiple Controllers 00:13:11.110 Size (in LBAs): 131072 (0GiB) 00:13:11.110 Capacity (in LBAs): 131072 (0GiB) 00:13:11.110 Utilization (in LBAs): 131072 (0GiB) 00:13:11.110 NGUID: 4B2CBEA2D3974C23A51AFA242345D785 00:13:11.110 UUID: 4b2cbea2-d397-4c23-a51a-fa242345d785 00:13:11.110 Thin Provisioning: Not Supported 00:13:11.110 Per-NS Atomic Units: Yes 00:13:11.110 Atomic Boundary Size (Normal): 0 00:13:11.110 Atomic Boundary Size (PFail): 0 00:13:11.110 Atomic Boundary Offset: 0 00:13:11.110 Maximum Single Source Range Length: 65535 00:13:11.110 Maximum Copy Length: 65535 00:13:11.110 Maximum Source Range Count: 1 00:13:11.110 NGUID/EUI64 Never Reused: No 00:13:11.110 Namespace Write Protected: No 00:13:11.110 Number of LBA Formats: 1 00:13:11.110 Current LBA Format: LBA Format #00 00:13:11.110 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:11.110 00:13:11.110 23:50:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:11.368 EAL: No free 2048 kB hugepages reported on node 1 00:13:11.368 [2024-07-24 
23:50:41.900120] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:16.625 Initializing NVMe Controllers 00:13:16.625 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:16.625 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:16.625 Initialization complete. Launching workers. 00:13:16.625 ======================================================== 00:13:16.625 Latency(us) 00:13:16.625 Device Information : IOPS MiB/s Average min max 00:13:16.625 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33628.11 131.36 3806.76 1188.99 9631.08 00:13:16.625 ======================================================== 00:13:16.625 Total : 33628.11 131.36 3806.76 1188.99 9631.08 00:13:16.625 00:13:16.625 [2024-07-24 23:50:47.004611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:16.625 23:50:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:16.625 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.882 [2024-07-24 23:50:47.238239] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:22.141 Initializing NVMe Controllers 00:13:22.141 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:22.141 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:22.141 Initialization complete. Launching workers. 
00:13:22.141 ======================================================== 00:13:22.141 Latency(us) 00:13:22.141 Device Information : IOPS MiB/s Average min max 00:13:22.141 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31678.73 123.75 4040.28 1220.39 7596.43 00:13:22.141 ======================================================== 00:13:22.142 Total : 31678.73 123.75 4040.28 1220.39 7596.43 00:13:22.142 00:13:22.142 [2024-07-24 23:50:52.259371] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:22.142 23:50:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:22.142 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.142 [2024-07-24 23:50:52.470373] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:27.442 [2024-07-24 23:50:57.612391] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:27.442 Initializing NVMe Controllers 00:13:27.442 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:27.442 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:27.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:27.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:27.442 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:27.442 Initialization complete. Launching workers. 00:13:27.442 Starting thread on core 2 00:13:27.442 Starting thread on core 3 00:13:27.442 Starting thread on core 1 00:13:27.442 23:50:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:27.442 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.442 [2024-07-24 23:50:57.916682] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.754 [2024-07-24 23:51:00.974716] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.754 Initializing NVMe Controllers 00:13:30.754 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.754 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.754 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:30.754 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:30.754 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:30.754 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:30.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:30.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:30.754 Initialization complete. Launching workers. 
00:13:30.754 Starting thread on core 1 with urgent priority queue 00:13:30.754 Starting thread on core 2 with urgent priority queue 00:13:30.754 Starting thread on core 3 with urgent priority queue 00:13:30.754 Starting thread on core 0 with urgent priority queue 00:13:30.754 SPDK bdev Controller (SPDK2 ) core 0: 5064.33 IO/s 19.75 secs/100000 ios 00:13:30.754 SPDK bdev Controller (SPDK2 ) core 1: 5707.33 IO/s 17.52 secs/100000 ios 00:13:30.754 SPDK bdev Controller (SPDK2 ) core 2: 5306.67 IO/s 18.84 secs/100000 ios 00:13:30.754 SPDK bdev Controller (SPDK2 ) core 3: 5502.00 IO/s 18.18 secs/100000 ios 00:13:30.754 ======================================================== 00:13:30.754 00:13:30.754 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:30.754 EAL: No free 2048 kB hugepages reported on node 1 00:13:30.754 [2024-07-24 23:51:01.275737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:30.754 Initializing NVMe Controllers 00:13:30.754 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.754 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:30.754 Namespace ID: 1 size: 0GB 00:13:30.754 Initialization complete. 00:13:30.754 INFO: using host memory buffer for IO 00:13:30.754 Hello world! 00:13:30.755 [2024-07-24 23:51:01.284800] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:30.755 23:51:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:31.011 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.011 [2024-07-24 23:51:01.570669] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:32.381 Initializing NVMe Controllers 00:13:32.381 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.381 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:32.381 Initialization complete. Launching workers. 
00:13:32.381 submit (in ns) avg, min, max = 9744.0, 3538.9, 4017030.0 00:13:32.381 complete (in ns) avg, min, max = 23511.4, 2070.0, 4015827.8 00:13:32.381 00:13:32.381 Submit histogram 00:13:32.381 ================ 00:13:32.381 Range in us Cumulative Count 00:13:32.381 3.532 - 3.556: 0.1525% ( 20) 00:13:32.381 3.556 - 3.579: 1.9369% ( 234) 00:13:32.381 3.579 - 3.603: 4.3999% ( 323) 00:13:32.381 3.603 - 3.627: 10.5689% ( 809) 00:13:32.381 3.627 - 3.650: 21.1453% ( 1387) 00:13:32.381 3.650 - 3.674: 33.7273% ( 1650) 00:13:32.381 3.674 - 3.698: 42.8321% ( 1194) 00:13:32.381 3.698 - 3.721: 49.0773% ( 819) 00:13:32.381 3.721 - 3.745: 53.3628% ( 562) 00:13:32.381 3.745 - 3.769: 57.8084% ( 583) 00:13:32.381 3.769 - 3.793: 61.9796% ( 547) 00:13:32.381 3.793 - 3.816: 65.3653% ( 444) 00:13:32.381 3.816 - 3.840: 68.2782% ( 382) 00:13:32.381 3.840 - 3.864: 71.5037% ( 423) 00:13:32.381 3.864 - 3.887: 75.8121% ( 565) 00:13:32.381 3.887 - 3.911: 80.0519% ( 556) 00:13:32.381 3.911 - 3.935: 83.9637% ( 513) 00:13:32.381 3.935 - 3.959: 86.6174% ( 348) 00:13:32.381 3.959 - 3.982: 88.4246% ( 237) 00:13:32.381 3.982 - 4.006: 90.0488% ( 213) 00:13:32.381 4.006 - 4.030: 91.2765% ( 161) 00:13:32.381 4.030 - 4.053: 92.2754% ( 131) 00:13:32.381 4.053 - 4.077: 93.4269% ( 151) 00:13:32.381 4.077 - 4.101: 94.3267% ( 118) 00:13:32.381 4.101 - 4.124: 95.0282% ( 92) 00:13:32.381 4.124 - 4.148: 95.6230% ( 78) 00:13:32.381 4.148 - 4.172: 96.0348% ( 54) 00:13:32.381 4.172 - 4.196: 96.4160% ( 50) 00:13:32.381 4.196 - 4.219: 96.6219% ( 27) 00:13:32.381 4.219 - 4.243: 96.6982% ( 10) 00:13:32.381 4.243 - 4.267: 96.8126% ( 15) 00:13:32.381 4.267 - 4.290: 96.9422% ( 17) 00:13:32.381 4.290 - 4.314: 97.0490% ( 14) 00:13:32.381 4.314 - 4.338: 97.1328% ( 11) 00:13:32.381 4.338 - 4.361: 97.2167% ( 11) 00:13:32.381 4.361 - 4.385: 97.2548% ( 5) 00:13:32.381 4.385 - 4.409: 97.3006% ( 6) 00:13:32.381 4.409 - 4.433: 97.3311% ( 4) 00:13:32.381 4.433 - 4.456: 97.3463% ( 2) 00:13:32.381 4.456 - 4.480: 97.3997% ( 7) 00:13:32.381 4.480 - 4.504: 97.4150% ( 2) 00:13:32.381 4.504 - 4.527: 97.4379% ( 3) 00:13:32.381 4.527 - 4.551: 97.4607% ( 3) 00:13:32.381 4.551 - 4.575: 97.4836% ( 3) 00:13:32.381 4.575 - 4.599: 97.4989% ( 2) 00:13:32.381 4.599 - 4.622: 97.5141% ( 2) 00:13:32.381 4.646 - 4.670: 97.5370% ( 3) 00:13:32.381 4.693 - 4.717: 97.5446% ( 1) 00:13:32.381 4.717 - 4.741: 97.5522% ( 1) 00:13:32.381 4.741 - 4.764: 97.5827% ( 4) 00:13:32.381 4.764 - 4.788: 97.6209% ( 5) 00:13:32.381 4.788 - 4.812: 97.6742% ( 7) 00:13:32.381 4.812 - 4.836: 97.7124% ( 5) 00:13:32.381 4.836 - 4.859: 97.7505% ( 5) 00:13:32.381 4.859 - 4.883: 97.7886% ( 5) 00:13:32.381 4.883 - 4.907: 97.8420% ( 7) 00:13:32.381 4.907 - 4.930: 97.9259% ( 11) 00:13:32.381 4.930 - 4.954: 97.9793% ( 7) 00:13:32.381 4.954 - 4.978: 98.0326% ( 7) 00:13:32.381 4.978 - 5.001: 98.0784% ( 6) 00:13:32.381 5.001 - 5.025: 98.1013% ( 3) 00:13:32.381 5.025 - 5.049: 98.1394% ( 5) 00:13:32.381 5.049 - 5.073: 98.1623% ( 3) 00:13:32.381 5.073 - 5.096: 98.1928% ( 4) 00:13:32.381 5.096 - 5.120: 98.2080% ( 2) 00:13:32.381 5.120 - 5.144: 98.2156% ( 1) 00:13:32.381 5.144 - 5.167: 98.2233% ( 1) 00:13:32.381 5.191 - 5.215: 98.2309% ( 1) 00:13:32.381 5.215 - 5.239: 98.2614% ( 4) 00:13:32.381 5.239 - 5.262: 98.2919% ( 4) 00:13:32.381 5.262 - 5.286: 98.3148% ( 3) 00:13:32.381 5.286 - 5.310: 98.3300% ( 2) 00:13:32.381 5.357 - 5.381: 98.3529% ( 3) 00:13:32.381 5.404 - 5.428: 98.3605% ( 1) 00:13:32.381 5.452 - 5.476: 98.3682% ( 1) 00:13:32.381 5.476 - 5.499: 98.3758% ( 1) 00:13:32.381 5.499 - 5.523: 98.3834% ( 1) 
00:13:32.381 5.807 - 5.831: 98.3910% ( 1) 00:13:32.381 6.068 - 6.116: 98.3987% ( 1) 00:13:32.381 6.258 - 6.305: 98.4063% ( 1) 00:13:32.381 6.353 - 6.400: 98.4139% ( 1) 00:13:32.381 6.495 - 6.542: 98.4215% ( 1) 00:13:32.381 6.732 - 6.779: 98.4368% ( 2) 00:13:32.381 7.016 - 7.064: 98.4444% ( 1) 00:13:32.381 7.206 - 7.253: 98.4520% ( 1) 00:13:32.381 7.253 - 7.301: 98.4673% ( 2) 00:13:32.381 7.348 - 7.396: 98.4749% ( 1) 00:13:32.381 7.490 - 7.538: 98.4825% ( 1) 00:13:32.381 7.538 - 7.585: 98.4978% ( 2) 00:13:32.381 7.633 - 7.680: 98.5207% ( 3) 00:13:32.381 7.680 - 7.727: 98.5435% ( 3) 00:13:32.381 7.727 - 7.775: 98.5512% ( 1) 00:13:32.381 7.822 - 7.870: 98.5664% ( 2) 00:13:32.381 7.870 - 7.917: 98.5893% ( 3) 00:13:32.381 7.917 - 7.964: 98.6045% ( 2) 00:13:32.381 8.059 - 8.107: 98.6122% ( 1) 00:13:32.381 8.107 - 8.154: 98.6198% ( 1) 00:13:32.381 8.154 - 8.201: 98.6274% ( 1) 00:13:32.381 8.201 - 8.249: 98.6350% ( 1) 00:13:32.381 8.249 - 8.296: 98.6427% ( 1) 00:13:32.381 8.439 - 8.486: 98.6503% ( 1) 00:13:32.381 8.533 - 8.581: 98.6579% ( 1) 00:13:32.381 8.676 - 8.723: 98.6732% ( 2) 00:13:32.381 8.818 - 8.865: 98.6808% ( 1) 00:13:32.381 8.913 - 8.960: 98.6884% ( 1) 00:13:32.381 9.007 - 9.055: 98.6961% ( 1) 00:13:32.381 9.244 - 9.292: 98.7037% ( 1) 00:13:32.381 9.387 - 9.434: 98.7113% ( 1) 00:13:32.381 9.481 - 9.529: 98.7189% ( 1) 00:13:32.381 9.529 - 9.576: 98.7342% ( 2) 00:13:32.381 9.671 - 9.719: 98.7418% ( 1) 00:13:32.381 9.908 - 9.956: 98.7494% ( 1) 00:13:32.381 9.956 - 10.003: 98.7571% ( 1) 00:13:32.381 10.050 - 10.098: 98.7647% ( 1) 00:13:32.381 10.240 - 10.287: 98.7723% ( 1) 00:13:32.381 10.619 - 10.667: 98.7799% ( 1) 00:13:32.381 10.951 - 10.999: 98.7876% ( 1) 00:13:32.381 10.999 - 11.046: 98.7952% ( 1) 00:13:32.381 11.141 - 11.188: 98.8028% ( 1) 00:13:32.381 11.615 - 11.662: 98.8104% ( 1) 00:13:32.382 11.994 - 12.041: 98.8181% ( 1) 00:13:32.382 12.136 - 12.231: 98.8257% ( 1) 00:13:32.382 12.516 - 12.610: 98.8409% ( 2) 00:13:32.382 12.895 - 12.990: 98.8486% ( 1) 00:13:32.382 12.990 - 13.084: 98.8562% ( 1) 00:13:32.382 13.274 - 13.369: 98.8714% ( 2) 00:13:32.382 13.653 - 13.748: 98.8791% ( 1) 00:13:32.382 13.748 - 13.843: 98.8867% ( 1) 00:13:32.382 13.843 - 13.938: 98.9019% ( 2) 00:13:32.382 14.033 - 14.127: 98.9096% ( 1) 00:13:32.382 14.222 - 14.317: 98.9172% ( 1) 00:13:32.382 14.791 - 14.886: 98.9248% ( 1) 00:13:32.382 14.886 - 14.981: 98.9324% ( 1) 00:13:32.382 14.981 - 15.076: 98.9401% ( 1) 00:13:32.382 17.067 - 17.161: 98.9553% ( 2) 00:13:32.382 17.256 - 17.351: 98.9706% ( 2) 00:13:32.382 17.351 - 17.446: 99.0011% ( 4) 00:13:32.382 17.446 - 17.541: 99.0239% ( 3) 00:13:32.382 17.541 - 17.636: 99.0621% ( 5) 00:13:32.382 17.636 - 17.730: 99.1231% ( 8) 00:13:32.382 17.730 - 17.825: 99.1765% ( 7) 00:13:32.382 17.825 - 17.920: 99.2298% ( 7) 00:13:32.382 17.920 - 18.015: 99.2603% ( 4) 00:13:32.382 18.015 - 18.110: 99.2985% ( 5) 00:13:32.382 18.110 - 18.204: 99.3976% ( 13) 00:13:32.382 18.204 - 18.299: 99.4738% ( 10) 00:13:32.382 18.299 - 18.394: 99.5425% ( 9) 00:13:32.382 18.394 - 18.489: 99.6035% ( 8) 00:13:32.382 18.489 - 18.584: 99.6340% ( 4) 00:13:32.382 18.584 - 18.679: 99.6645% ( 4) 00:13:32.382 18.679 - 18.773: 99.6950% ( 4) 00:13:32.382 18.868 - 18.963: 99.7102% ( 2) 00:13:32.382 18.963 - 19.058: 99.7179% ( 1) 00:13:32.382 19.058 - 19.153: 99.7255% ( 1) 00:13:32.382 19.153 - 19.247: 99.7407% ( 2) 00:13:32.382 19.247 - 19.342: 99.7560% ( 2) 00:13:32.382 19.342 - 19.437: 99.7636% ( 1) 00:13:32.382 19.721 - 19.816: 99.7712% ( 1) 00:13:32.382 20.006 - 20.101: 99.7789% ( 1) 00:13:32.382 
20.859 - 20.954: 99.7865% ( 1) 00:13:32.382 21.618 - 21.713: 99.8017% ( 2) 00:13:32.382 21.713 - 21.807: 99.8094% ( 1) 00:13:32.382 22.566 - 22.661: 99.8170% ( 1) 00:13:32.382 23.040 - 23.135: 99.8246% ( 1) 00:13:32.382 25.600 - 25.790: 99.8322% ( 1) 00:13:32.382 25.979 - 26.169: 99.8399% ( 1) 00:13:32.382 28.824 - 29.013: 99.8475% ( 1) 00:13:32.382 29.203 - 29.393: 99.8551% ( 1) 00:13:32.382 3980.705 - 4004.978: 99.9847% ( 17) 00:13:32.382 4004.978 - 4029.250: 100.0000% ( 2) 00:13:32.382 00:13:32.382 Complete histogram 00:13:32.382 ================== 00:13:32.382 Range in us Cumulative Count 00:13:32.382 2.062 - 2.074: 0.3050% ( 40) 00:13:32.382 2.074 - 2.086: 24.3099% ( 3148) 00:13:32.382 2.086 - 2.098: 38.6838% ( 1885) 00:13:32.382 2.098 - 2.110: 41.2460% ( 336) 00:13:32.382 2.110 - 2.121: 57.1298% ( 2083) 00:13:32.382 2.121 - 2.133: 61.5220% ( 576) 00:13:32.382 2.133 - 2.145: 64.2062% ( 352) 00:13:32.382 2.145 - 2.157: 72.2510% ( 1055) 00:13:32.382 2.157 - 2.169: 74.4624% ( 290) 00:13:32.382 2.169 - 2.181: 76.3840% ( 252) 00:13:32.382 2.181 - 2.193: 81.1957% ( 631) 00:13:32.382 2.193 - 2.204: 82.3395% ( 150) 00:13:32.382 2.204 - 2.216: 83.1402% ( 105) 00:13:32.382 2.216 - 2.228: 86.6936% ( 466) 00:13:32.382 2.228 - 2.240: 89.2024% ( 329) 00:13:32.382 2.240 - 2.252: 90.9181% ( 225) 00:13:32.382 2.252 - 2.264: 93.2134% ( 301) 00:13:32.382 2.264 - 2.276: 93.8005% ( 77) 00:13:32.382 2.276 - 2.287: 94.0445% ( 32) 00:13:32.382 2.287 - 2.299: 94.3801% ( 44) 00:13:32.382 2.299 - 2.311: 95.0511% ( 88) 00:13:32.382 2.311 - 2.323: 95.5162% ( 61) 00:13:32.382 2.323 - 2.335: 95.6001% ( 11) 00:13:32.382 2.335 - 2.347: 95.7221% ( 16) 00:13:32.382 2.347 - 2.359: 95.9585% ( 31) 00:13:32.382 2.359 - 2.370: 96.2025% ( 32) 00:13:32.382 2.370 - 2.382: 96.5457% ( 45) 00:13:32.382 2.382 - 2.394: 97.0337% ( 64) 00:13:32.382 2.394 - 2.406: 97.3845% ( 46) 00:13:32.382 2.406 - 2.418: 97.5446% ( 21) 00:13:32.382 2.418 - 2.430: 97.6666% ( 16) 00:13:32.382 2.430 - 2.441: 97.7962% ( 17) 00:13:32.382 2.441 - 2.453: 97.8725% ( 10) 00:13:32.382 2.453 - 2.465: 98.0708% ( 26) 00:13:32.382 2.465 - 2.477: 98.1318% ( 8) 00:13:32.382 2.477 - 2.489: 98.1928% ( 8) 00:13:32.382 2.489 - 2.501: 98.2461% ( 7) 00:13:32.382 2.501 - 2.513: 98.2919% ( 6) 00:13:32.382 2.513 - 2.524: 98.3148% ( 3) 00:13:32.382 2.524 - 2.536: 98.3377% ( 3) 00:13:32.382 2.548 - 2.560: 98.3682% ( 4) 00:13:32.382 2.560 - 2.572: 98.3758% ( 1) 00:13:32.382 2.572 - 2.584: 98.3910% ( 2) 00:13:32.382 2.584 - 2.596: 98.3987% ( 1) 00:13:32.382 2.619 - 2.631: 98.4063% ( 1) 00:13:32.382 2.631 - 2.643: 98.4139% ( 1) 00:13:32.382 2.643 - 2.655: 98.4215% ( 1) 00:13:32.382 2.690 - 2.702: 98.4292% ( 1) 00:13:32.382 2.714 - 2.726: 98.4444% ( 2) 00:13:32.382 2.761 - 2.773: 98.4597% ( 2) 00:13:32.382 2.868 - 2.880: 98.4673% ( 1) 00:13:32.382 2.951 - 2.963: 98.4749% ( 1) 00:13:32.382 2.963 - 2.975: 98.4825% ( 1) 00:13:32.382 3.129 - 3.153: 98.4902% ( 1) 00:13:32.382 3.271 - 3.295: 98.4978% ( 1) 00:13:32.382 3.342 - 3.366: 98.5054% ( 1) 00:13:32.382 3.366 - 3.390: 98.5207% ( 2) 00:13:32.382 3.390 - 3.413: 98.5359% ( 2) 00:13:32.382 3.413 - 3.437: 98.5435% ( 1) 00:13:32.382 3.437 - 3.461: 98.5664% ( 3) 00:13:32.382 3.461 - 3.484: 98.5740% ( 1) 00:13:32.382 3.484 - 3.508: 98.5817% ( 1) 00:13:32.382 3.508 - 3.532: 98.5969% ( 2) 00:13:32.382 3.532 - 3.556: 98.6122% ( 2) 00:13:32.382 3.556 - 3.579: 98.6198% ( 1) 00:13:32.382 3.579 - 3.603: 98.6427% ( 3) 00:13:32.382 3.603 - 3.627: 98.6579% ( 2) 00:13:32.382 3.650 - 3.674: 98.6655% ( 1) 00:13:32.382 3.674 - 3.698: 98.6732% ( 1) 
00:13:32.382 3.698 - 3.721: 98.6884% ( 2) 00:13:32.382 3.769 - 3.793: 98.6961% ( 1) 00:13:32.382 3.793 - 3.816: 98.7037% ( 1) 00:13:32.382 3.864 - 3.887: 98.7113% ( 1) 00:13:32.382 3.887 - 3.911: 98.7266% ( 2) 00:13:32.382 4.053 - 4.077: 98.7342% ( 1) 00:13:32.382 4.433 - 4.456: 98.7418% ( 1) 00:13:32.382 5.073 - 5.096: 98.7494% ( 1) 00:13:32.382 5.831 - 5.855: 98.7647% ( 2) 00:13:32.382 5.879 - 5.902: 98.7723% ( 1) 00:13:32.382 6.068 - 6.116: 98.7799% ( 1) 00:13:32.382 6.116 - 6.163: 98.7952% ( 2) 00:13:32.382 [2024-07-24 23:51:02.665037] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:32.382 6.210 - 6.258: 98.8028% ( 1) 00:13:32.382 6.258 - 6.305: 98.8333% ( 4) 00:13:32.382 6.353 - 6.400: 98.8409% ( 1) 00:13:32.382 6.400 - 6.447: 98.8486% ( 1) 00:13:32.382 6.542 - 6.590: 98.8562% ( 1) 00:13:32.382 6.684 - 6.732: 98.8638% ( 1) 00:13:32.382 6.874 - 6.921: 98.8714% ( 1) 00:13:32.382 7.159 - 7.206: 98.8791% ( 1) 00:13:32.382 7.348 - 7.396: 98.8867% ( 1) 00:13:32.382 7.822 - 7.870: 98.8943% ( 1) 00:13:32.382 10.240 - 10.287: 98.9019% ( 1) 00:13:32.382 11.283 - 11.330: 98.9096% ( 1) 00:13:32.382 13.084 - 13.179: 98.9172% ( 1) 00:13:32.382 15.360 - 15.455: 98.9248% ( 1) 00:13:32.382 15.455 - 15.550: 98.9401% ( 2) 00:13:32.382 15.644 - 15.739: 98.9706% ( 4) 00:13:32.382 15.929 - 16.024: 99.0163% ( 6) 00:13:32.383 16.024 - 16.119: 99.0316% ( 2) 00:13:32.383 16.119 - 16.213: 99.0621% ( 4) 00:13:32.383 16.213 - 16.308: 99.0926% ( 4) 00:13:32.383 16.308 - 16.403: 99.1078% ( 2) 00:13:32.383 16.403 - 16.498: 99.1688% ( 8) 00:13:32.383 16.498 - 16.593: 99.2375% ( 9) 00:13:32.383 16.593 - 16.687: 99.2451% ( 1) 00:13:32.383 16.687 - 16.782: 99.2603% ( 2) 00:13:32.383 16.782 - 16.877: 99.2985% ( 5) 00:13:32.383 16.877 - 16.972: 99.3518% ( 7) 00:13:32.383 16.972 - 17.067: 99.3747% ( 3) 00:13:32.383 17.067 - 17.161: 99.3900% ( 2) 00:13:32.383 17.256 - 17.351: 99.4052% ( 2) 00:13:32.383 17.351 - 17.446: 99.4128% ( 1) 00:13:32.383 17.541 - 17.636: 99.4205% ( 1) 00:13:32.383 17.636 - 17.730: 99.4281% ( 1) 00:13:32.383 17.730 - 17.825: 99.4357% ( 1) 00:13:32.383 18.204 - 18.299: 99.4433% ( 1) 00:13:32.383 18.299 - 18.394: 99.4510% ( 1) 00:13:32.383 18.489 - 18.584: 99.4586% ( 1) 00:13:32.383 19.911 - 20.006: 99.4662% ( 1) 00:13:32.383 3046.210 - 3058.347: 99.4738% ( 1) 00:13:32.383 3980.705 - 4004.978: 99.9085% ( 57) 00:13:32.383 4004.978 - 4029.250: 100.0000% ( 12) 00:13:32.383 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:32.383 [ 00:13:32.383 { 00:13:32.383 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:32.383 "subtype": "Discovery", 00:13:32.383 "listen_addresses": [], 00:13:32.383 "allow_any_host": true, 00:13:32.383 "hosts": [] 00:13:32.383 }, 00:13:32.383 { 00:13:32.383 "nqn": "nqn.2019-07.io.spdk:cnode1", 
00:13:32.383 "subtype": "NVMe", 00:13:32.383 "listen_addresses": [ 00:13:32.383 { 00:13:32.383 "trtype": "VFIOUSER", 00:13:32.383 "adrfam": "IPv4", 00:13:32.383 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:32.383 "trsvcid": "0" 00:13:32.383 } 00:13:32.383 ], 00:13:32.383 "allow_any_host": true, 00:13:32.383 "hosts": [], 00:13:32.383 "serial_number": "SPDK1", 00:13:32.383 "model_number": "SPDK bdev Controller", 00:13:32.383 "max_namespaces": 32, 00:13:32.383 "min_cntlid": 1, 00:13:32.383 "max_cntlid": 65519, 00:13:32.383 "namespaces": [ 00:13:32.383 { 00:13:32.383 "nsid": 1, 00:13:32.383 "bdev_name": "Malloc1", 00:13:32.383 "name": "Malloc1", 00:13:32.383 "nguid": "11C43792BB1F43ADA807513C8426CD58", 00:13:32.383 "uuid": "11c43792-bb1f-43ad-a807-513c8426cd58" 00:13:32.383 }, 00:13:32.383 { 00:13:32.383 "nsid": 2, 00:13:32.383 "bdev_name": "Malloc3", 00:13:32.383 "name": "Malloc3", 00:13:32.383 "nguid": "6AFECB90F3F8425798C49F3DD7B9052F", 00:13:32.383 "uuid": "6afecb90-f3f8-4257-98c4-9f3dd7b9052f" 00:13:32.383 } 00:13:32.383 ] 00:13:32.383 }, 00:13:32.383 { 00:13:32.383 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:32.383 "subtype": "NVMe", 00:13:32.383 "listen_addresses": [ 00:13:32.383 { 00:13:32.383 "trtype": "VFIOUSER", 00:13:32.383 "adrfam": "IPv4", 00:13:32.383 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:32.383 "trsvcid": "0" 00:13:32.383 } 00:13:32.383 ], 00:13:32.383 "allow_any_host": true, 00:13:32.383 "hosts": [], 00:13:32.383 "serial_number": "SPDK2", 00:13:32.383 "model_number": "SPDK bdev Controller", 00:13:32.383 "max_namespaces": 32, 00:13:32.383 "min_cntlid": 1, 00:13:32.383 "max_cntlid": 65519, 00:13:32.383 "namespaces": [ 00:13:32.383 { 00:13:32.383 "nsid": 1, 00:13:32.383 "bdev_name": "Malloc2", 00:13:32.383 "name": "Malloc2", 00:13:32.383 "nguid": "4B2CBEA2D3974C23A51AFA242345D785", 00:13:32.383 "uuid": "4b2cbea2-d397-4c23-a51a-fa242345d785" 00:13:32.383 } 00:13:32.383 ] 00:13:32.383 } 00:13:32.383 ] 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3359576 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1263 -- # local i=0 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # return 0 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:32.383 23:51:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:32.640 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.640 [2024-07-24 23:51:03.133711] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:32.640 Malloc4 00:13:32.900 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:32.900 [2024-07-24 23:51:03.503408] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.158 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.158 Asynchronous Event Request test 00:13:33.158 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.158 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:33.158 Registering asynchronous event callbacks... 00:13:33.158 Starting namespace attribute notice tests for all controllers... 00:13:33.158 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:33.158 aer_cb - Changed Namespace 00:13:33.158 Cleaning up... 00:13:33.158 [ 00:13:33.158 { 00:13:33.158 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:33.158 "subtype": "Discovery", 00:13:33.158 "listen_addresses": [], 00:13:33.158 "allow_any_host": true, 00:13:33.158 "hosts": [] 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.158 "subtype": "NVMe", 00:13:33.158 "listen_addresses": [ 00:13:33.158 { 00:13:33.158 "trtype": "VFIOUSER", 00:13:33.158 "adrfam": "IPv4", 00:13:33.158 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.158 "trsvcid": "0" 00:13:33.158 } 00:13:33.158 ], 00:13:33.158 "allow_any_host": true, 00:13:33.158 "hosts": [], 00:13:33.158 "serial_number": "SPDK1", 00:13:33.158 "model_number": "SPDK bdev Controller", 00:13:33.158 "max_namespaces": 32, 00:13:33.158 "min_cntlid": 1, 00:13:33.158 "max_cntlid": 65519, 00:13:33.158 "namespaces": [ 00:13:33.158 { 00:13:33.158 "nsid": 1, 00:13:33.158 "bdev_name": "Malloc1", 00:13:33.158 "name": "Malloc1", 00:13:33.158 "nguid": "11C43792BB1F43ADA807513C8426CD58", 00:13:33.158 "uuid": "11c43792-bb1f-43ad-a807-513c8426cd58" 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "nsid": 2, 00:13:33.158 "bdev_name": "Malloc3", 00:13:33.158 "name": "Malloc3", 00:13:33.158 "nguid": "6AFECB90F3F8425798C49F3DD7B9052F", 00:13:33.158 "uuid": "6afecb90-f3f8-4257-98c4-9f3dd7b9052f" 00:13:33.158 } 00:13:33.158 ] 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.158 "subtype": "NVMe", 00:13:33.158 "listen_addresses": [ 00:13:33.158 { 00:13:33.158 "trtype": "VFIOUSER", 00:13:33.158 "adrfam": "IPv4", 00:13:33.158 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.158 "trsvcid": "0" 00:13:33.158 } 00:13:33.158 ], 00:13:33.158 "allow_any_host": true, 00:13:33.158 "hosts": [], 00:13:33.158 
"serial_number": "SPDK2", 00:13:33.158 "model_number": "SPDK bdev Controller", 00:13:33.158 "max_namespaces": 32, 00:13:33.158 "min_cntlid": 1, 00:13:33.158 "max_cntlid": 65519, 00:13:33.158 "namespaces": [ 00:13:33.158 { 00:13:33.158 "nsid": 1, 00:13:33.158 "bdev_name": "Malloc2", 00:13:33.158 "name": "Malloc2", 00:13:33.158 "nguid": "4B2CBEA2D3974C23A51AFA242345D785", 00:13:33.158 "uuid": "4b2cbea2-d397-4c23-a51a-fa242345d785" 00:13:33.158 }, 00:13:33.158 { 00:13:33.158 "nsid": 2, 00:13:33.158 "bdev_name": "Malloc4", 00:13:33.158 "name": "Malloc4", 00:13:33.158 "nguid": "E76FB71F0AE340BEBCC8552CC959E446", 00:13:33.158 "uuid": "e76fb71f-0ae3-40be-bcc8-552cc959e446" 00:13:33.158 } 00:13:33.158 ] 00:13:33.158 } 00:13:33.158 ] 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3359576 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3353974 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3353974 ']' 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3353974 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3353974 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3353974' 00:13:33.416 killing process with pid 3353974 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3353974 00:13:33.416 23:51:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3353974 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3359722 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3359722' 00:13:33.673 Process pid: 3359722 00:13:33.673 23:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3359722 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 3359722 ']' 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.673 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:33.673 [2024-07-24 23:51:04.239135] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:33.673 [2024-07-24 23:51:04.240213] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:13:33.673 [2024-07-24 23:51:04.240300] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.673 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.930 [2024-07-24 23:51:04.301150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.930 [2024-07-24 23:51:04.416973] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.930 [2024-07-24 23:51:04.417037] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.930 [2024-07-24 23:51:04.417050] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.930 [2024-07-24 23:51:04.417062] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.930 [2024-07-24 23:51:04.417086] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.930 [2024-07-24 23:51:04.417191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.930 [2024-07-24 23:51:04.417277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.930 [2024-07-24 23:51:04.417304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.930 [2024-07-24 23:51:04.417306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:33.930 [2024-07-24 23:51:04.526173] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:33.930 [2024-07-24 23:51:04.526435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:33.930 [2024-07-24 23:51:04.526692] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:33.930 [2024-07-24 23:51:04.527394] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:33.930 [2024-07-24 23:51:04.527632] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:34.187 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.187 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:34.187 23:51:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:35.118 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:35.375 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:35.375 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:35.375 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:35.375 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:35.375 23:51:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:35.633 Malloc1 00:13:35.633 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:35.891 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:36.149 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:36.406 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:36.406 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:36.406 23:51:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:36.663 Malloc2 00:13:36.663 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:36.919 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:37.176 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:13:37.433 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:37.433 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3359722 00:13:37.433 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 3359722 ']' 00:13:37.433 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 3359722 00:13:37.433 23:51:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3359722 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3359722' 00:13:37.433 killing process with pid 3359722 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 3359722 00:13:37.433 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 3359722 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:37.999 00:13:37.999 real 0m52.724s 00:13:37.999 user 3m27.842s 00:13:37.999 sys 0m4.465s 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:37.999 ************************************ 00:13:37.999 END TEST nvmf_vfio_user 00:13:37.999 ************************************ 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.999 ************************************ 00:13:37.999 START TEST nvmf_vfio_user_nvme_compliance 00:13:37.999 ************************************ 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:37.999 * Looking for test storage... 
00:13:37.999 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3360678 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3360678' 00:13:37.999 Process pid: 3360678 00:13:37.999 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3360678 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 3360678 ']' 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:38.000 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:38.000 [2024-07-24 23:51:08.518042] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:13:38.000 [2024-07-24 23:51:08.518121] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.000 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.000 [2024-07-24 23:51:08.575936] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:38.258 [2024-07-24 23:51:08.687582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.258 [2024-07-24 23:51:08.687640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.258 [2024-07-24 23:51:08.687658] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.258 [2024-07-24 23:51:08.687671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.258 [2024-07-24 23:51:08.687683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:38.258 [2024-07-24 23:51:08.687770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.258 [2024-07-24 23:51:08.687839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.258 [2024-07-24 23:51:08.687836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:38.258 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:38.258 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:38.258 23:51:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 malloc0 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.628 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.629 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:39.629 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.629 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:39.629 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.629 23:51:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:39.629 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.629 00:13:39.629 00:13:39.629 CUnit - A unit testing framework for C - Version 2.1-3 00:13:39.629 http://cunit.sourceforge.net/ 00:13:39.629 00:13:39.629 00:13:39.629 Suite: nvme_compliance 00:13:39.629 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-24 23:51:10.037803] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.629 [2024-07-24 23:51:10.039267] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:39.629 [2024-07-24 23:51:10.039294] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:39.629 [2024-07-24 23:51:10.039308] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:39.629 [2024-07-24 23:51:10.040826] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.629 passed 00:13:39.629 Test: admin_identify_ctrlr_verify_fused ...[2024-07-24 23:51:10.128535] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.629 [2024-07-24 23:51:10.131562] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.629 passed 00:13:39.629 Test: admin_identify_ns ...[2024-07-24 23:51:10.217806] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.886 [2024-07-24 23:51:10.277259] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:39.886 [2024-07-24 23:51:10.285272] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:39.886 [2024-07-24 
23:51:10.306383] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.886 passed 00:13:39.886 Test: admin_get_features_mandatory_features ...[2024-07-24 23:51:10.392545] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.886 [2024-07-24 23:51:10.395576] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:39.886 passed 00:13:39.886 Test: admin_get_features_optional_features ...[2024-07-24 23:51:10.480102] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:39.886 [2024-07-24 23:51:10.483124] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.143 passed 00:13:40.143 Test: admin_set_features_number_of_queues ...[2024-07-24 23:51:10.564222] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.143 [2024-07-24 23:51:10.671490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.143 passed 00:13:40.143 Test: admin_get_log_page_mandatory_logs ...[2024-07-24 23:51:10.755463] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.400 [2024-07-24 23:51:10.758481] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.400 passed 00:13:40.400 Test: admin_get_log_page_with_lpo ...[2024-07-24 23:51:10.840913] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.401 [2024-07-24 23:51:10.908260] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:40.401 [2024-07-24 23:51:10.921345] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.401 passed 00:13:40.401 Test: fabric_property_get ...[2024-07-24 23:51:11.005023] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.401 [2024-07-24 23:51:11.006337] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:40.401 [2024-07-24 23:51:11.008046] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.657 passed 00:13:40.657 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-24 23:51:11.092606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.657 [2024-07-24 23:51:11.093920] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:40.657 [2024-07-24 23:51:11.095635] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.657 passed 00:13:40.657 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-24 23:51:11.177835] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.657 [2024-07-24 23:51:11.262267] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:40.913 [2024-07-24 23:51:11.277271] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:40.913 [2024-07-24 23:51:11.282350] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.913 passed 00:13:40.913 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-24 23:51:11.365981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:40.913 [2024-07-24 23:51:11.367320] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
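
Each passing case above follows the same envelope: the harness enables the controller over vfio-user, issues the admin command under test (often expecting a mapping or validation *ERROR* from the target), and disables the controller again. The whole suite is one standalone binary driven by a transport ID string; a sketch of the invocation, with the flags copied from the trace (-g is SPDK's single-file hugepage segments option):

# Run the NVMe compliance CUnit suite against the vfio-user endpoint set up above.
./test/nvme/compliance/nvme_compliance -g \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
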
00:13:40.913 [2024-07-24 23:51:11.369009] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:40.913 passed 00:13:40.913 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-24 23:51:11.450785] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.170 [2024-07-24 23:51:11.526257] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:41.170 [2024-07-24 23:51:11.550257] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:41.170 [2024-07-24 23:51:11.555379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.170 passed 00:13:41.170 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-24 23:51:11.638989] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.170 [2024-07-24 23:51:11.640333] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:41.170 [2024-07-24 23:51:11.640390] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:41.170 [2024-07-24 23:51:11.642008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.170 passed 00:13:41.170 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-24 23:51:11.723157] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.428 [2024-07-24 23:51:11.816268] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:41.428 [2024-07-24 23:51:11.824251] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:41.428 [2024-07-24 23:51:11.832250] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:41.428 [2024-07-24 23:51:11.840254] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:41.428 [2024-07-24 23:51:11.869353] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.428 passed 00:13:41.428 Test: admin_create_io_sq_verify_pc ...[2024-07-24 23:51:11.950081] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:41.428 [2024-07-24 23:51:11.965279] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:41.428 [2024-07-24 23:51:11.982557] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:41.428 passed 00:13:41.685 Test: admin_create_io_qp_max_qps ...[2024-07-24 23:51:12.070109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:42.614 [2024-07-24 23:51:13.180259] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:43.185 [2024-07-24 23:51:13.561714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.185 passed 00:13:43.185 Test: admin_create_io_sq_shared_cq ...[2024-07-24 23:51:13.647090] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:43.185 [2024-07-24 23:51:13.779265] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:43.442 [2024-07-24 23:51:13.816343] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:43.442 passed 00:13:43.442 00:13:43.442 Run Summary: Type Total Ran Passed Failed Inactive 00:13:43.442 
suites 1 1 n/a 0 0 00:13:43.442 tests 18 18 18 0 0 00:13:43.442 asserts 360 360 360 0 n/a 00:13:43.442 00:13:43.442 Elapsed time = 1.567 seconds 00:13:43.442 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3360678 00:13:43.442 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 3360678 ']' 00:13:43.442 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 3360678 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3360678 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3360678' 00:13:43.443 killing process with pid 3360678 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 3360678 00:13:43.443 23:51:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 3360678 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:43.700 00:13:43.700 real 0m5.813s 00:13:43.700 user 0m16.256s 00:13:43.700 sys 0m0.519s 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.700 ************************************ 00:13:43.700 END TEST nvmf_vfio_user_nvme_compliance 00:13:43.700 ************************************ 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:43.700 ************************************ 00:13:43.700 START TEST nvmf_vfio_user_fuzz 00:13:43.700 ************************************ 00:13:43.700 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:43.958 * Looking for test storage... 
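
The killprocess teardown above is the guard the suites share: before signalling the pid it confirms the process is still alive and still the SPDK reactor rather than an unrelated reuse of the pid. A condensed sketch of that helper, mirroring the checks visible in the trace (the sudo special case is omitted):

killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                # still alive?
    name=$(ps --no-headers -o comm= "$pid")   # reactor_0 here; the real helper uses
                                              # this to decide whether sudo is needed
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"                # reap it so the next suite starts clean
}
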
00:13:43.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.958 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[duplicated toolchain entries elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin [paths/export.sh@3-@6 and nvmf/common.sh@47-@31: the same PATH is re-prepended, exported, and echoed, and NVMF_APP is rebuilt with -i "$NVMF_APP_SHM_ID" -e 0xFFFF, exactly as in the compliance suite above; duplicate dumps elided] 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33
-- # '[' -n '' ']' 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3361546 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3361546' 00:13:43.959 Process pid: 3361546 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3361546 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 3361546 ']' 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
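
Because the VFIOUSER transport exposes the controller as files under traddr rather than an IP endpoint, the fuzz script scrubs and recreates that directory around the run (rm -rf before start, mkdir -p before the listener is added, rm -rf again on exit). The pattern in isolation:

traddr=/var/run/vfio-user
rm -rf "$traddr"      # drop any stale socket from a previous run
mkdir -p "$traddr"    # must exist before nvmf_subsystem_add_listener
# ... create the subsystem, add the listener, run the fuzzer ...
rm -rf "$traddr"      # cleanup on exit
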
00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.959 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:44.216 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.216 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:44.216 23:51:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 malloc0 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
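
rpc_cmd in the trace is a thin wrapper over SPDK's scripts/rpc.py talking to /var/tmp/spdk.sock. Issued by hand, the five RPCs that build this fuzz target would look like the following (paths relative to the SPDK repo root; the default RPC socket is assumed):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER                            # register the transport
$rpc bdev_malloc_create 64 512 -b malloc0                         # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk  # -a allows any host NQN
$rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0     # becomes namespace 1
$rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
    -t VFIOUSER -a /var/run/vfio-user -s 0                        # traddr is the socket dir
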
00:13:45.149 23:51:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:17.236 Fuzzing completed. Shutting down the fuzz application 00:14:17.236 00:14:17.236 Dumping successful admin opcodes: 00:14:17.236 8, 9, 10, 24, 00:14:17.236 Dumping successful io opcodes: 00:14:17.236 0, 00:14:17.236 NS: 0x200003a1ef00 I/O qp, Total commands completed: 686795, total successful commands: 2677, random_seed: 3071579904 00:14:17.236 NS: 0x200003a1ef00 admin qp, Total commands completed: 149539, total successful commands: 1202, random_seed: 2650677184 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 3361546 ']' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3361546' 00:14:17.236 killing process with pid 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 3361546 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:17.236 00:14:17.236 real 0m32.376s 00:14:17.236 user 0m33.397s 00:14:17.236 sys 0m26.552s 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:17.236 
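
The fuzzer above ran on core 1 (-m 0x2) for 30 seconds (-t 30) with a fixed seed (-S 123456) so a failure can be replayed, and completed roughly 687k I/O plus 150k admin commands without crashing the target; the opcode dump lists which randomly generated commands the controller accepted. The invocation, as in the trace (see the tool's --help for the -N and -a options):

./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
    -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a
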
************************************ 00:14:17.236 END TEST nvmf_vfio_user_fuzz 00:14:17.236 ************************************ 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.236 ************************************ 00:14:17.236 START TEST nvmf_auth_target 00:14:17.236 ************************************ 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:17.236 * Looking for test storage... 00:14:17.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.236 23:51:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [paths/export.sh@2-@6: the toolchain PATH is prepended, exported, and echoed a third time, as in the two suites above; duplicate dumps elided] 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.236 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:14:17.237 23:51:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.171 23:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:18.171 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:18.171 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:18.171 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.171 23:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:18.171 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.171 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.172 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.172 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.172 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.429 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.430 23:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.430 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.430 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:14:18.430 00:14:18.430 --- 10.0.0.2 ping statistics --- 00:14:18.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.430 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.430 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.430 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:14:18.430 00:14:18.430 --- 10.0.0.1 ping statistics --- 00:14:18.430 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.430 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=3366986 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3366986 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3366986 ']' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.430 23:51:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.430 23:51:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=3367104 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b26cc7ca2ae0fc088222519ff2a0055151571eaf38b1d4a0 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1in 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b26cc7ca2ae0fc088222519ff2a0055151571eaf38b1d4a0 0 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b26cc7ca2ae0fc088222519ff2a0055151571eaf38b1d4a0 0 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b26cc7ca2ae0fc088222519ff2a0055151571eaf38b1d4a0 00:14:19.363 23:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1in 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1in 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1in 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=11c644f519c4bc9f6c09b3a45b10e8ec7dc2471b99e5f9a32b3eca39fa18ffc2 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.WLG 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 11c644f519c4bc9f6c09b3a45b10e8ec7dc2471b99e5f9a32b3eca39fa18ffc2 3 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 11c644f519c4bc9f6c09b3a45b10e8ec7dc2471b99e5f9a32b3eca39fa18ffc2 3 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=11c644f519c4bc9f6c09b3a45b10e8ec7dc2471b99e5f9a32b3eca39fa18ffc2 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:19.363 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.621 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.WLG 00:14:19.621 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.WLG 00:14:19.621 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.WLG 00:14:19.621 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:14:19.621 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.622 23:51:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a1396a47f8bcdf06c0d3c6b5857cebc0 00:14:19.622 23:51:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.04s 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a1396a47f8bcdf06c0d3c6b5857cebc0 1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a1396a47f8bcdf06c0d3c6b5857cebc0 1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a1396a47f8bcdf06c0d3c6b5857cebc0 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.04s 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.04s 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.04s 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=d7f85e1a478a8b0937d72dc2394cf361c4859520e5501aac 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.AEk 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key d7f85e1a478a8b0937d72dc2394cf361c4859520e5501aac 2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
d7f85e1a478a8b0937d72dc2394cf361c4859520e5501aac 2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=d7f85e1a478a8b0937d72dc2394cf361c4859520e5501aac 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.AEk 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.AEk 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.AEk 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c377fe89ce9a469704aefc745ef7577b57eacfc89e123b92 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aVh 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c377fe89ce9a469704aefc745ef7577b57eacfc89e123b92 2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c377fe89ce9a469704aefc745ef7577b57eacfc89e123b92 2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c377fe89ce9a469704aefc745ef7577b57eacfc89e123b92 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aVh 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aVh 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.aVh 00:14:19.622 23:51:50 
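Every key slot in this stretch of the trace is built the same way: xxd -p pulls the requested number of random bytes from /dev/urandom as a hex string, and format_dhchap_key wraps that string into a DHHC-1:<digest>:<base64>: secret through the inline "python -" heredoc, whose body the xtrace does not capture. A sketch of that helper, assuming the NVMe TP 8006 secret encoding in which a little-endian CRC-32 of the key bytes is appended before base64; the DHHC-1:00:YjI2... secret used by nvme connect later in this run does decode back to the b26cc7... hex string generated here, which is consistent with that layout:

format_key() { # sketch: format_key DHHC-1 <hex-string> <digest-id 0..3>
  local prefix=$1 key=$2 digest=$3
  python - <<EOF
import base64, zlib
key = b"$key"                                # the ASCII hex string itself is the key material
crc = zlib.crc32(key).to_bytes(4, "little")  # assumption: trailing little-endian CRC-32 (TP 8006)
print("$prefix:{:02x}:{}:".format($digest, base64.b64encode(key + crc).decode()), end="")
EOF
}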
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=53e5fd7cfe0934ddfbc2061da4d62c44 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QTU 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 53e5fd7cfe0934ddfbc2061da4d62c44 1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 53e5fd7cfe0934ddfbc2061da4d62c44 1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=53e5fd7cfe0934ddfbc2061da4d62c44 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QTU 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QTU 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.QTU 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c50b07f5ee66b4cfc748a5f753a46669786952007fd01fa1a9c123d987183ec8 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:14:19.622 
23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.epl 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c50b07f5ee66b4cfc748a5f753a46669786952007fd01fa1a9c123d987183ec8 3 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c50b07f5ee66b4cfc748a5f753a46669786952007fd01fa1a9c123d987183ec8 3 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c50b07f5ee66b4cfc748a5f753a46669786952007fd01fa1a9c123d987183ec8 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:14:19.622 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.epl 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.epl 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.epl 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 3366986 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3366986 ']' 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:19.880 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.137 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 3367104 /var/tmp/host.sock 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3367104 ']' 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:14:20.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.138 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1in 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.395 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.396 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1in 00:14:20.396 23:51:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1in 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.WLG ]] 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WLG 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WLG 00:14:20.653 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WLG 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.04s 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.911 23:51:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.04s 00:14:20.911 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.04s 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.AEk ]] 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AEk 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AEk 00:14:21.168 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.AEk 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.aVh 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.aVh 00:14:21.426 23:51:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.aVh 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.QTU ]] 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QTU 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QTU 00:14:21.683 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.QTU 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
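Each generated keyfile is registered twice because the handshake is driven from both ends: once on the target's default RPC socket (rpc_cmd) so the nvmf subsystem can reference key0/ckey0 and friends by name, and once on the host application's socket (hostrpc, /var/tmp/host.sock) for the initiator-side bdev_nvme code. Condensed from the trace for slot 0; slots 1 through 3 follow identically:

# target side (nvmf_tgt, default /var/tmp/spdk.sock)
scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.1in
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WLG
# host side (spdk_tgt acting as initiator on /var/tmp/host.sock)
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0  /tmp/spdk.key-null.1in
scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WLG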
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.epl 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.epl 00:14:21.940 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.epl 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.198 23:51:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.455 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:23.019 00:14:23.019 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:23.019 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:23.019 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.276 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:23.276 { 00:14:23.276 "cntlid": 1, 00:14:23.276 "qid": 0, 00:14:23.276 "state": "enabled", 00:14:23.276 "thread": "nvmf_tgt_poll_group_000", 00:14:23.276 "listen_address": { 00:14:23.276 "trtype": "TCP", 00:14:23.276 "adrfam": "IPv4", 00:14:23.276 "traddr": "10.0.0.2", 00:14:23.276 "trsvcid": "4420" 00:14:23.276 }, 00:14:23.276 "peer_address": { 00:14:23.276 "trtype": "TCP", 00:14:23.276 "adrfam": "IPv4", 00:14:23.276 "traddr": "10.0.0.1", 00:14:23.276 "trsvcid": "33556" 00:14:23.276 }, 00:14:23.276 "auth": { 00:14:23.276 "state": "completed", 00:14:23.276 "digest": "sha256", 00:14:23.276 "dhgroup": "null" 00:14:23.276 } 00:14:23.276 } 00:14:23.276 ]' 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.277 23:51:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.534 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:24.465 23:51:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.722 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:14:24.980 00:14:24.980 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:24.980 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:24.980 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:25.237 { 00:14:25.237 "cntlid": 3, 00:14:25.237 "qid": 0, 00:14:25.237 "state": "enabled", 00:14:25.237 "thread": "nvmf_tgt_poll_group_000", 00:14:25.237 "listen_address": { 00:14:25.237 "trtype": "TCP", 00:14:25.237 "adrfam": "IPv4", 00:14:25.237 "traddr": "10.0.0.2", 00:14:25.237 "trsvcid": "4420" 00:14:25.237 }, 00:14:25.237 "peer_address": { 00:14:25.237 "trtype": "TCP", 00:14:25.237 "adrfam": "IPv4", 00:14:25.237 "traddr": "10.0.0.1", 00:14:25.237 "trsvcid": "33570" 00:14:25.237 }, 00:14:25.237 "auth": { 00:14:25.237 "state": "completed", 00:14:25.237 "digest": "sha256", 00:14:25.237 "dhgroup": "null" 00:14:25.237 } 00:14:25.237 } 00:14:25.237 ]' 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:25.237 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:25.494 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:25.494 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:25.494 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.494 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.494 23:51:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.751 23:51:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.682 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.682 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.940 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:27.202 00:14:27.202 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:27.202 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:27.202 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.492 { 00:14:27.492 "cntlid": 5, 00:14:27.492 "qid": 0, 00:14:27.492 "state": "enabled", 00:14:27.492 "thread": "nvmf_tgt_poll_group_000", 00:14:27.492 "listen_address": { 00:14:27.492 "trtype": "TCP", 00:14:27.492 "adrfam": "IPv4", 00:14:27.492 "traddr": "10.0.0.2", 00:14:27.492 "trsvcid": "4420" 00:14:27.492 }, 00:14:27.492 "peer_address": { 00:14:27.492 "trtype": "TCP", 00:14:27.492 "adrfam": "IPv4", 00:14:27.492 "traddr": "10.0.0.1", 00:14:27.492 "trsvcid": "33592" 00:14:27.492 }, 00:14:27.492 "auth": { 00:14:27.492 "state": "completed", 00:14:27.492 "digest": "sha256", 00:14:27.492 "dhgroup": "null" 00:14:27.492 } 00:14:27.492 } 00:14:27.492 ]' 00:14:27.492 23:51:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:27.492 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:27.492 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:27.492 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:27.492 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:27.753 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.753 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:27.753 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.753 23:51:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:14:28.685 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:28.942 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.200 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:29.457 00:14:29.457 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:29.457 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.457 23:51:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.714 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:29.714 { 00:14:29.714 "cntlid": 7, 00:14:29.714 "qid": 0, 00:14:29.714 "state": "enabled", 00:14:29.714 "thread": "nvmf_tgt_poll_group_000", 00:14:29.714 "listen_address": { 00:14:29.714 "trtype": "TCP", 00:14:29.714 "adrfam": "IPv4", 00:14:29.714 "traddr": "10.0.0.2", 00:14:29.714 "trsvcid": "4420" 00:14:29.714 }, 00:14:29.714 "peer_address": { 00:14:29.714 "trtype": "TCP", 00:14:29.714 "adrfam": "IPv4", 00:14:29.714 "traddr": "10.0.0.1", 00:14:29.714 "trsvcid": "54254" 00:14:29.714 }, 00:14:29.714 "auth": { 00:14:29.714 "state": "completed", 00:14:29.714 "digest": "sha256", 00:14:29.714 "dhgroup": "null" 00:14:29.714 } 00:14:29.714 } 00:14:29.714 ]' 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.715 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.972 23:52:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:30.903 23:52:01 
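At this point the sha256/null pass has cycled through all four key slots, and the dhgroup loop advances to ffdhe2048; every iteration that follows repeats the same five-step shape. Condensed from the trace for one iteration (the NQN and host UUID are this rig's fixtures; the DHHC-1 secrets are elided here):

# 1. pin the host app to one digest/dhgroup combination
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# 2. allow the host NQN on the subsystem with a key (and, for slots 0-2, a controller key)
scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 3. authenticate with the SPDK initiator
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
# 4. verify the qpair auth fields and detach (see the jq checks sketched below)
# 5. repeat the handshake with the kernel initiator using the formatted secrets
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0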
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:30.903 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.161 23:52:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.724 00:14:31.724 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:31.724 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:31.724 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:31.981 23:52:02 
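The qpairs listing that follows is how each iteration is judged: after attaching, the test dumps nvmf_subsystem_get_qpairs and asserts on the auth block field by field with the jq filters visible in the trace. A condensed version of those checks, using the ffdhe2048 values expected in this pass:

qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]  # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished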
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:31.981 { 00:14:31.981 "cntlid": 9, 00:14:31.981 "qid": 0, 00:14:31.981 "state": "enabled", 00:14:31.981 "thread": "nvmf_tgt_poll_group_000", 00:14:31.981 "listen_address": { 00:14:31.981 "trtype": "TCP", 00:14:31.981 "adrfam": "IPv4", 00:14:31.981 "traddr": "10.0.0.2", 00:14:31.981 "trsvcid": "4420" 00:14:31.981 }, 00:14:31.981 "peer_address": { 00:14:31.981 "trtype": "TCP", 00:14:31.981 "adrfam": "IPv4", 00:14:31.981 "traddr": "10.0.0.1", 00:14:31.981 "trsvcid": "54266" 00:14:31.981 }, 00:14:31.981 "auth": { 00:14:31.981 "state": "completed", 00:14:31.981 "digest": "sha256", 00:14:31.981 "dhgroup": "ffdhe2048" 00:14:31.981 } 00:14:31.981 } 00:14:31.981 ]' 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:31.981 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.982 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.982 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.239 23:52:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.171 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.171 23:52:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.428 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.993 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.993 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.251 { 00:14:34.251 "cntlid": 11, 00:14:34.251 "qid": 0, 00:14:34.251 "state": "enabled", 00:14:34.251 "thread": "nvmf_tgt_poll_group_000", 00:14:34.251 "listen_address": { 
00:14:34.251 "trtype": "TCP", 00:14:34.251 "adrfam": "IPv4", 00:14:34.251 "traddr": "10.0.0.2", 00:14:34.251 "trsvcid": "4420" 00:14:34.251 }, 00:14:34.251 "peer_address": { 00:14:34.251 "trtype": "TCP", 00:14:34.251 "adrfam": "IPv4", 00:14:34.251 "traddr": "10.0.0.1", 00:14:34.251 "trsvcid": "54298" 00:14:34.251 }, 00:14:34.251 "auth": { 00:14:34.251 "state": "completed", 00:14:34.251 "digest": "sha256", 00:14:34.251 "dhgroup": "ffdhe2048" 00:14:34.251 } 00:14:34.251 } 00:14:34.251 ]' 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:34.251 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:34.508 23:52:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:35.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.439 23:52:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.696 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.953 00:14:35.953 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:35.953 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:35.953 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:36.210 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:36.210 { 00:14:36.210 "cntlid": 13, 00:14:36.210 "qid": 0, 00:14:36.210 "state": "enabled", 00:14:36.210 "thread": "nvmf_tgt_poll_group_000", 00:14:36.210 "listen_address": { 00:14:36.210 "trtype": "TCP", 00:14:36.210 "adrfam": "IPv4", 00:14:36.210 "traddr": "10.0.0.2", 00:14:36.211 "trsvcid": "4420" 00:14:36.211 }, 00:14:36.211 "peer_address": { 00:14:36.211 "trtype": "TCP", 00:14:36.211 "adrfam": "IPv4", 00:14:36.211 "traddr": "10.0.0.1", 00:14:36.211 "trsvcid": "54322" 00:14:36.211 }, 00:14:36.211 "auth": { 00:14:36.211 
"state": "completed", 00:14:36.211 "digest": "sha256", 00:14:36.211 "dhgroup": "ffdhe2048" 00:14:36.211 } 00:14:36.211 } 00:14:36.211 ]' 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:36.468 23:52:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:36.725 23:52:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:37.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.656 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:37.913 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:38.170 00:14:38.170 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:38.170 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:38.170 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:38.428 { 00:14:38.428 "cntlid": 15, 00:14:38.428 "qid": 0, 00:14:38.428 "state": "enabled", 00:14:38.428 "thread": "nvmf_tgt_poll_group_000", 00:14:38.428 "listen_address": { 00:14:38.428 "trtype": "TCP", 00:14:38.428 "adrfam": "IPv4", 00:14:38.428 "traddr": "10.0.0.2", 00:14:38.428 "trsvcid": "4420" 00:14:38.428 }, 00:14:38.428 "peer_address": { 00:14:38.428 "trtype": "TCP", 00:14:38.428 "adrfam": "IPv4", 00:14:38.428 "traddr": "10.0.0.1", 00:14:38.428 "trsvcid": "54338" 00:14:38.428 }, 00:14:38.428 "auth": { 00:14:38.428 "state": "completed", 00:14:38.428 "digest": "sha256", 00:14:38.428 "dhgroup": "ffdhe2048" 00:14:38.428 } 00:14:38.428 } 00:14:38.428 ]' 00:14:38.428 23:52:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:38.428 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:38.428 23:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:38.685 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:38.685 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:38.685 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:38.686 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:38.686 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:38.943 23:52:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:39.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:39.875 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.133 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:40.392 00:14:40.392 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:40.392 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:40.392 23:52:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:40.649 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:40.649 { 00:14:40.649 "cntlid": 17, 00:14:40.649 "qid": 0, 00:14:40.649 "state": "enabled", 00:14:40.649 "thread": "nvmf_tgt_poll_group_000", 00:14:40.649 "listen_address": { 00:14:40.649 "trtype": "TCP", 00:14:40.649 "adrfam": "IPv4", 00:14:40.649 "traddr": "10.0.0.2", 00:14:40.649 "trsvcid": "4420" 00:14:40.649 }, 00:14:40.649 "peer_address": { 00:14:40.649 "trtype": "TCP", 00:14:40.649 "adrfam": "IPv4", 00:14:40.649 "traddr": "10.0.0.1", 00:14:40.649 "trsvcid": "52542" 00:14:40.649 }, 00:14:40.650 "auth": { 00:14:40.650 "state": "completed", 00:14:40.650 "digest": "sha256", 00:14:40.650 "dhgroup": "ffdhe3072" 00:14:40.650 } 00:14:40.650 } 00:14:40.650 ]' 00:14:40.650 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:40.650 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:40.650 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:40.650 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:40.650 23:52:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:40.907 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:40.907 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:40.907 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:41.164 23:52:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.143 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.402 23:52:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.402 23:52:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:42.659 00:14:42.659 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.659 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.659 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:42.917 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.918 { 00:14:42.918 "cntlid": 19, 00:14:42.918 "qid": 0, 00:14:42.918 "state": "enabled", 00:14:42.918 "thread": "nvmf_tgt_poll_group_000", 00:14:42.918 "listen_address": { 00:14:42.918 "trtype": "TCP", 00:14:42.918 "adrfam": "IPv4", 00:14:42.918 "traddr": "10.0.0.2", 00:14:42.918 "trsvcid": "4420" 00:14:42.918 }, 00:14:42.918 "peer_address": { 00:14:42.918 "trtype": "TCP", 00:14:42.918 "adrfam": "IPv4", 00:14:42.918 "traddr": "10.0.0.1", 00:14:42.918 "trsvcid": "52590" 00:14:42.918 }, 00:14:42.918 "auth": { 00:14:42.918 "state": "completed", 00:14:42.918 "digest": "sha256", 00:14:42.918 "dhgroup": "ffdhe3072" 00:14:42.918 } 00:14:42.918 } 00:14:42.918 ]' 00:14:42.918 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.918 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.918 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.918 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:42.918 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:43.175 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:43.175 23:52:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:43.175 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:43.432 23:52:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:44.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:44.365 23:52:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.623 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:44.880 00:14:44.880 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.880 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.880 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.138 { 00:14:45.138 "cntlid": 21, 00:14:45.138 "qid": 0, 00:14:45.138 "state": "enabled", 00:14:45.138 "thread": "nvmf_tgt_poll_group_000", 00:14:45.138 "listen_address": { 00:14:45.138 "trtype": "TCP", 00:14:45.138 "adrfam": "IPv4", 00:14:45.138 "traddr": "10.0.0.2", 00:14:45.138 "trsvcid": "4420" 00:14:45.138 }, 00:14:45.138 "peer_address": { 00:14:45.138 "trtype": "TCP", 00:14:45.138 "adrfam": "IPv4", 00:14:45.138 "traddr": "10.0.0.1", 00:14:45.138 "trsvcid": "52616" 00:14:45.138 }, 00:14:45.138 "auth": { 00:14:45.138 "state": "completed", 00:14:45.138 "digest": "sha256", 00:14:45.138 "dhgroup": "ffdhe3072" 00:14:45.138 } 00:14:45.138 } 00:14:45.138 ]' 00:14:45.138 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.395 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.396 23:52:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.653 
23:52:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.585 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:46.843 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:47.101 00:14:47.359 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.359 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.359 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:47.618 { 00:14:47.618 "cntlid": 23, 00:14:47.618 "qid": 0, 00:14:47.618 "state": "enabled", 00:14:47.618 "thread": "nvmf_tgt_poll_group_000", 00:14:47.618 "listen_address": { 00:14:47.618 "trtype": "TCP", 00:14:47.618 "adrfam": "IPv4", 00:14:47.618 "traddr": "10.0.0.2", 00:14:47.618 "trsvcid": "4420" 00:14:47.618 }, 00:14:47.618 "peer_address": { 00:14:47.618 "trtype": "TCP", 00:14:47.618 "adrfam": "IPv4", 00:14:47.618 "traddr": "10.0.0.1", 00:14:47.618 "trsvcid": "52646" 00:14:47.618 }, 00:14:47.618 "auth": { 00:14:47.618 "state": "completed", 00:14:47.618 "digest": "sha256", 00:14:47.618 "dhgroup": "ffdhe3072" 00:14:47.618 } 00:14:47.618 } 00:14:47.618 ]' 00:14:47.618 23:52:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:47.618 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:47.876 23:52:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:14:48.808 23:52:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:48.808 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.066 23:52:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:49.632 00:14:49.632 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:49.632 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:49.632 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:49.890 { 00:14:49.890 "cntlid": 25, 00:14:49.890 "qid": 0, 00:14:49.890 "state": "enabled", 00:14:49.890 "thread": "nvmf_tgt_poll_group_000", 00:14:49.890 "listen_address": { 00:14:49.890 "trtype": "TCP", 00:14:49.890 "adrfam": "IPv4", 00:14:49.890 "traddr": "10.0.0.2", 00:14:49.890 "trsvcid": "4420" 00:14:49.890 }, 00:14:49.890 "peer_address": { 00:14:49.890 "trtype": "TCP", 00:14:49.890 "adrfam": "IPv4", 00:14:49.890 "traddr": "10.0.0.1", 00:14:49.890 "trsvcid": "38682" 00:14:49.890 }, 00:14:49.890 "auth": { 00:14:49.890 "state": "completed", 00:14:49.890 "digest": "sha256", 00:14:49.890 "dhgroup": "ffdhe4096" 00:14:49.890 } 00:14:49.890 } 00:14:49.890 ]' 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.890 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:50.147 23:52:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
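The records above and below make up one full iteration of the digest/dhgroup/key matrix that target/auth.sh walks: pin the SPDK host to a single DH-HMAC-CHAP digest and DH group, register the host NQN on the target with the key pair under test, attach a controller through the SPDK initiator (which forces the authentication handshake), verify the negotiated parameters on the resulting qpair, detach, then repeat the connect with the kernel nvme-cli initiator before removing the host again. A minimal sketch of that loop body, assembled from the sockets, addresses, and NQNs visible in this run; the key names key0/ckey0 refer to keys registered earlier in the run, DHCHAP_SECRET stands in for the DHHC-1 secret printed in the log, and the target is assumed to listen on rpc.py's default socket:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Host side: restrict negotiation to one digest and one DH group.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096

  # Target side: allow the host, binding its DH-HMAC-CHAP key (plus an
  # optional controller key for bidirectional authentication).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # SPDK initiator: attaching the controller triggers the handshake.
  "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # ... qpair auth state is checked here (see the jq filters in the log) ...
  "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # Kernel initiator: the same handshake driven through nvme-cli.
  # DHCHAP_SECRET is a placeholder for the DHHC-1:xx:... secret shown above.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$DHCHAP_SECRET"
  nvme disconnect -n "$subnqn"
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"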
00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.079 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.644 23:52:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.902 00:14:51.902 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:51.902 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.902 23:52:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:52.160 { 00:14:52.160 "cntlid": 27, 00:14:52.160 "qid": 0, 00:14:52.160 "state": "enabled", 00:14:52.160 "thread": "nvmf_tgt_poll_group_000", 00:14:52.160 "listen_address": { 00:14:52.160 "trtype": "TCP", 00:14:52.160 "adrfam": "IPv4", 00:14:52.160 "traddr": "10.0.0.2", 00:14:52.160 "trsvcid": "4420" 00:14:52.160 }, 00:14:52.160 "peer_address": { 00:14:52.160 "trtype": "TCP", 00:14:52.160 "adrfam": "IPv4", 00:14:52.160 "traddr": "10.0.0.1", 00:14:52.160 "trsvcid": "38702" 00:14:52.160 }, 00:14:52.160 "auth": { 00:14:52.160 "state": "completed", 00:14:52.160 "digest": "sha256", 00:14:52.160 "dhgroup": "ffdhe4096" 00:14:52.160 } 00:14:52.160 } 00:14:52.160 ]' 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:52.160 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:52.417 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.417 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.417 23:52:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.675 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:53.605 23:52:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.863 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:54.120 00:14:54.120 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:54.120 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.120 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:54.685 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.685 23:52:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:54.685 { 00:14:54.685 "cntlid": 29, 00:14:54.685 "qid": 0, 00:14:54.685 "state": "enabled", 00:14:54.685 "thread": "nvmf_tgt_poll_group_000", 00:14:54.685 "listen_address": { 00:14:54.685 "trtype": "TCP", 00:14:54.685 "adrfam": "IPv4", 00:14:54.685 "traddr": "10.0.0.2", 00:14:54.685 "trsvcid": "4420" 00:14:54.685 }, 00:14:54.685 "peer_address": { 00:14:54.685 "trtype": "TCP", 00:14:54.685 "adrfam": "IPv4", 00:14:54.685 "traddr": "10.0.0.1", 00:14:54.685 "trsvcid": "38738" 00:14:54.685 }, 00:14:54.685 "auth": { 00:14:54.685 "state": "completed", 00:14:54.685 "digest": "sha256", 00:14:54.685 "dhgroup": "ffdhe4096" 00:14:54.685 } 00:14:54.685 } 00:14:54.685 ]' 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.685 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.942 23:52:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.889 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:14:55.890 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:55.890 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.147 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:56.404 00:14:56.404 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:56.404 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.404 23:52:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@45 -- # qpairs='[ 00:14:56.685 { 00:14:56.685 "cntlid": 31, 00:14:56.685 "qid": 0, 00:14:56.685 "state": "enabled", 00:14:56.685 "thread": "nvmf_tgt_poll_group_000", 00:14:56.685 "listen_address": { 00:14:56.685 "trtype": "TCP", 00:14:56.685 "adrfam": "IPv4", 00:14:56.685 "traddr": "10.0.0.2", 00:14:56.685 "trsvcid": "4420" 00:14:56.685 }, 00:14:56.685 "peer_address": { 00:14:56.685 "trtype": "TCP", 00:14:56.685 "adrfam": "IPv4", 00:14:56.685 "traddr": "10.0.0.1", 00:14:56.685 "trsvcid": "38754" 00:14:56.685 }, 00:14:56.685 "auth": { 00:14:56.685 "state": "completed", 00:14:56.685 "digest": "sha256", 00:14:56.685 "dhgroup": "ffdhe4096" 00:14:56.685 } 00:14:56.685 } 00:14:56.685 ]' 00:14:56.685 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.945 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.203 23:52:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:58.135 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:58.392 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.393 23:52:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.958 00:14:58.958 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:58.958 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:58.958 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:59.216 { 00:14:59.216 "cntlid": 33, 00:14:59.216 "qid": 0, 00:14:59.216 "state": "enabled", 00:14:59.216 "thread": "nvmf_tgt_poll_group_000", 00:14:59.216 "listen_address": { 00:14:59.216 "trtype": "TCP", 00:14:59.216 "adrfam": "IPv4", 
00:14:59.216 "traddr": "10.0.0.2", 00:14:59.216 "trsvcid": "4420" 00:14:59.216 }, 00:14:59.216 "peer_address": { 00:14:59.216 "trtype": "TCP", 00:14:59.216 "adrfam": "IPv4", 00:14:59.216 "traddr": "10.0.0.1", 00:14:59.216 "trsvcid": "38772" 00:14:59.216 }, 00:14:59.216 "auth": { 00:14:59.216 "state": "completed", 00:14:59.216 "digest": "sha256", 00:14:59.216 "dhgroup": "ffdhe6144" 00:14:59.216 } 00:14:59.216 } 00:14:59.216 ]' 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:59.216 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:59.474 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:59.474 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:59.474 23:52:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.732 23:52:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:00.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:00.664 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.922 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.485 00:15:01.485 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:01.485 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:01.485 23:52:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:01.742 { 00:15:01.742 "cntlid": 35, 00:15:01.742 "qid": 0, 00:15:01.742 "state": "enabled", 00:15:01.742 "thread": "nvmf_tgt_poll_group_000", 00:15:01.742 "listen_address": { 00:15:01.742 "trtype": "TCP", 00:15:01.742 "adrfam": "IPv4", 00:15:01.742 "traddr": "10.0.0.2", 00:15:01.742 "trsvcid": "4420" 00:15:01.742 }, 00:15:01.742 "peer_address": { 00:15:01.742 "trtype": "TCP", 00:15:01.742 "adrfam": "IPv4", 00:15:01.742 "traddr": "10.0.0.1", 00:15:01.742 "trsvcid": "50962" 00:15:01.742 }, 00:15:01.742 "auth": { 00:15:01.742 
"state": "completed", 00:15:01.742 "digest": "sha256", 00:15:01.742 "dhgroup": "ffdhe6144" 00:15:01.742 } 00:15:01.742 } 00:15:01.742 ]' 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:01.742 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.999 23:52:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.931 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:02.931 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key2 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.189 23:52:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.753 00:15:03.753 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:03.753 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:03.753 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.010 { 00:15:04.010 "cntlid": 37, 00:15:04.010 "qid": 0, 00:15:04.010 "state": "enabled", 00:15:04.010 "thread": "nvmf_tgt_poll_group_000", 00:15:04.010 "listen_address": { 00:15:04.010 "trtype": "TCP", 00:15:04.010 "adrfam": "IPv4", 00:15:04.010 "traddr": "10.0.0.2", 00:15:04.010 "trsvcid": "4420" 00:15:04.010 }, 00:15:04.010 "peer_address": { 00:15:04.010 "trtype": "TCP", 00:15:04.010 "adrfam": "IPv4", 00:15:04.010 "traddr": "10.0.0.1", 00:15:04.010 "trsvcid": "50998" 00:15:04.010 }, 00:15:04.010 "auth": { 00:15:04.010 "state": "completed", 00:15:04.010 "digest": "sha256", 00:15:04.010 "dhgroup": "ffdhe6144" 00:15:04.010 } 00:15:04.010 } 00:15:04.010 ]' 00:15:04.010 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.267 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ 
sha256 == \s\h\a\2\5\6 ]] 00:15:04.267 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.267 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:04.267 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.267 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.268 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.268 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.525 23:52:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:05.457 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:05.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:05.458 23:52:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:05.716 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:06.280 00:15:06.280 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.280 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.281 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.538 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.538 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.538 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:06.538 23:52:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:06.538 { 00:15:06.538 "cntlid": 39, 00:15:06.538 "qid": 0, 00:15:06.538 "state": "enabled", 00:15:06.538 "thread": "nvmf_tgt_poll_group_000", 00:15:06.538 "listen_address": { 00:15:06.538 "trtype": "TCP", 00:15:06.538 "adrfam": "IPv4", 00:15:06.538 "traddr": "10.0.0.2", 00:15:06.538 "trsvcid": "4420" 00:15:06.538 }, 00:15:06.538 "peer_address": { 00:15:06.538 "trtype": "TCP", 00:15:06.538 "adrfam": "IPv4", 00:15:06.538 "traddr": "10.0.0.1", 00:15:06.538 "trsvcid": "51028" 00:15:06.538 }, 00:15:06.538 "auth": { 00:15:06.538 "state": "completed", 00:15:06.538 "digest": "sha256", 00:15:06.538 "dhgroup": "ffdhe6144" 00:15:06.538 } 00:15:06.538 } 00:15:06.538 ]' 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:06.538 
23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:06.538 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:06.796 23:52:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:07.737 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.737 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.737 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:07.737 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.995 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:07.995 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:07.995 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:07.995 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:07.995 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.252 23:52:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:08.252 23:52:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:09.183 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.183 { 00:15:09.183 "cntlid": 41, 00:15:09.183 "qid": 0, 00:15:09.183 "state": "enabled", 00:15:09.183 "thread": "nvmf_tgt_poll_group_000", 00:15:09.183 "listen_address": { 00:15:09.183 "trtype": "TCP", 00:15:09.183 "adrfam": "IPv4", 00:15:09.183 "traddr": "10.0.0.2", 00:15:09.183 "trsvcid": "4420" 00:15:09.183 }, 00:15:09.183 "peer_address": { 00:15:09.183 "trtype": "TCP", 00:15:09.183 "adrfam": "IPv4", 00:15:09.183 "traddr": "10.0.0.1", 00:15:09.183 "trsvcid": "51048" 00:15:09.183 }, 00:15:09.183 "auth": { 00:15:09.183 "state": "completed", 00:15:09.183 "digest": "sha256", 00:15:09.183 "dhgroup": "ffdhe8192" 00:15:09.183 } 00:15:09.183 } 00:15:09.183 ]' 00:15:09.183 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.441 23:52:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.441 23:52:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.698 23:52:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:10.630 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:10.887 23:52:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:11.820 00:15:11.820 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.820 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.820 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:12.110 { 00:15:12.110 "cntlid": 43, 00:15:12.110 "qid": 0, 00:15:12.110 "state": "enabled", 00:15:12.110 "thread": "nvmf_tgt_poll_group_000", 00:15:12.110 "listen_address": { 00:15:12.110 "trtype": "TCP", 00:15:12.110 "adrfam": "IPv4", 00:15:12.110 "traddr": "10.0.0.2", 00:15:12.110 "trsvcid": "4420" 00:15:12.110 }, 00:15:12.110 "peer_address": { 00:15:12.110 "trtype": "TCP", 00:15:12.110 "adrfam": "IPv4", 00:15:12.110 "traddr": "10.0.0.1", 00:15:12.110 "trsvcid": "45408" 00:15:12.110 }, 00:15:12.110 "auth": { 00:15:12.110 "state": "completed", 00:15:12.110 "digest": "sha256", 00:15:12.110 "dhgroup": "ffdhe8192" 00:15:12.110 } 00:15:12.110 } 00:15:12.110 ]' 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:12.110 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:12.368 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.368 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.368 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.368 23:52:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:13.739 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:13.739 23:52:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:13.739 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.740 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:13.740 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:13.740 23:52:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:14.671 00:15:14.671 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:14.671 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:14.671 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:14.928 { 00:15:14.928 "cntlid": 45, 00:15:14.928 "qid": 0, 00:15:14.928 "state": "enabled", 00:15:14.928 "thread": "nvmf_tgt_poll_group_000", 00:15:14.928 "listen_address": { 00:15:14.928 "trtype": "TCP", 00:15:14.928 "adrfam": "IPv4", 00:15:14.928 "traddr": "10.0.0.2", 00:15:14.928 "trsvcid": "4420" 00:15:14.928 }, 00:15:14.928 "peer_address": { 00:15:14.928 "trtype": "TCP", 00:15:14.928 "adrfam": "IPv4", 00:15:14.928 "traddr": "10.0.0.1", 00:15:14.928 "trsvcid": "45438" 00:15:14.928 }, 00:15:14.928 "auth": { 00:15:14.928 "state": "completed", 00:15:14.928 "digest": "sha256", 00:15:14.928 "dhgroup": "ffdhe8192" 00:15:14.928 } 00:15:14.928 } 00:15:14.928 ]' 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:14.928 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:14.929 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.186 23:52:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret 
DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.557 23:52:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:16.557 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:17.488 00:15:17.488 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:17.488 23:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:17.488 23:52:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:17.745 { 00:15:17.745 "cntlid": 47, 00:15:17.745 "qid": 0, 00:15:17.745 "state": "enabled", 00:15:17.745 "thread": "nvmf_tgt_poll_group_000", 00:15:17.745 "listen_address": { 00:15:17.745 "trtype": "TCP", 00:15:17.745 "adrfam": "IPv4", 00:15:17.745 "traddr": "10.0.0.2", 00:15:17.745 "trsvcid": "4420" 00:15:17.745 }, 00:15:17.745 "peer_address": { 00:15:17.745 "trtype": "TCP", 00:15:17.745 "adrfam": "IPv4", 00:15:17.745 "traddr": "10.0.0.1", 00:15:17.745 "trsvcid": "45458" 00:15:17.745 }, 00:15:17.745 "auth": { 00:15:17.745 "state": "completed", 00:15:17.745 "digest": "sha256", 00:15:17.745 "dhgroup": "ffdhe8192" 00:15:17.745 } 00:15:17.745 } 00:15:17.745 ]' 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.745 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.312 23:52:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:19.241 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.241 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.241 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:19.241 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.241 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.242 23:52:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.806 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:19.806 { 00:15:19.806 "cntlid": 49, 00:15:19.806 "qid": 0, 00:15:19.806 "state": "enabled", 00:15:19.806 "thread": "nvmf_tgt_poll_group_000", 00:15:19.806 "listen_address": { 00:15:19.806 "trtype": "TCP", 00:15:19.806 "adrfam": "IPv4", 00:15:19.806 "traddr": "10.0.0.2", 00:15:19.806 "trsvcid": "4420" 00:15:19.806 }, 00:15:19.806 "peer_address": { 00:15:19.806 "trtype": "TCP", 00:15:19.806 "adrfam": "IPv4", 00:15:19.806 "traddr": "10.0.0.1", 00:15:19.806 "trsvcid": "53810" 00:15:19.806 }, 00:15:19.806 "auth": { 00:15:19.806 "state": "completed", 00:15:19.806 "digest": "sha384", 00:15:19.806 "dhgroup": "null" 00:15:19.806 } 00:15:19.806 } 00:15:19.806 ]' 00:15:19.806 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.064 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.321 23:52:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:21.252 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.252 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:21.252 23:52:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.252 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.252 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.253 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:21.253 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:21.253 23:52:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.510 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.767 00:15:21.767 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:21.767 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:21.767 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:22.025 { 00:15:22.025 "cntlid": 51, 00:15:22.025 "qid": 0, 00:15:22.025 "state": "enabled", 00:15:22.025 "thread": "nvmf_tgt_poll_group_000", 00:15:22.025 "listen_address": { 00:15:22.025 "trtype": "TCP", 00:15:22.025 "adrfam": "IPv4", 00:15:22.025 "traddr": "10.0.0.2", 00:15:22.025 "trsvcid": "4420" 00:15:22.025 }, 00:15:22.025 "peer_address": { 00:15:22.025 "trtype": "TCP", 00:15:22.025 "adrfam": "IPv4", 00:15:22.025 "traddr": "10.0.0.1", 00:15:22.025 "trsvcid": "53836" 00:15:22.025 }, 00:15:22.025 "auth": { 00:15:22.025 "state": "completed", 00:15:22.025 "digest": "sha384", 00:15:22.025 "dhgroup": "null" 00:15:22.025 } 00:15:22.025 } 00:15:22.025 ]' 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:22.025 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:22.283 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:22.283 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:22.283 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.283 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.283 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.540 23:52:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:23.472 23:52:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.730 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.989 00:15:23.989 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:23.989 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:23.989 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:15:24.247 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:24.247 { 00:15:24.247 "cntlid": 53, 00:15:24.247 "qid": 0, 00:15:24.247 "state": "enabled", 00:15:24.247 "thread": "nvmf_tgt_poll_group_000", 00:15:24.247 "listen_address": { 00:15:24.247 "trtype": "TCP", 00:15:24.247 "adrfam": "IPv4", 00:15:24.247 "traddr": "10.0.0.2", 00:15:24.247 "trsvcid": "4420" 00:15:24.247 }, 00:15:24.248 "peer_address": { 00:15:24.248 "trtype": "TCP", 00:15:24.248 "adrfam": "IPv4", 00:15:24.248 "traddr": "10.0.0.1", 00:15:24.248 "trsvcid": "53862" 00:15:24.248 }, 00:15:24.248 "auth": { 00:15:24.248 "state": "completed", 00:15:24.248 "digest": "sha384", 00:15:24.248 "dhgroup": "null" 00:15:24.248 } 00:15:24.248 } 00:15:24.248 ]' 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.248 23:52:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.506 23:52:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.878 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:25.878 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:26.136 00:15:26.136 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:26.136 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:26.136 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:26.393 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:26.393 { 00:15:26.393 "cntlid": 55, 00:15:26.393 "qid": 0, 00:15:26.393 "state": "enabled", 00:15:26.393 "thread": "nvmf_tgt_poll_group_000", 00:15:26.393 "listen_address": { 00:15:26.393 "trtype": "TCP", 00:15:26.393 "adrfam": "IPv4", 00:15:26.393 "traddr": "10.0.0.2", 00:15:26.393 "trsvcid": "4420" 00:15:26.393 }, 00:15:26.393 "peer_address": { 
00:15:26.393 "trtype": "TCP", 00:15:26.393 "adrfam": "IPv4", 00:15:26.393 "traddr": "10.0.0.1", 00:15:26.393 "trsvcid": "53890" 00:15:26.393 }, 00:15:26.394 "auth": { 00:15:26.394 "state": "completed", 00:15:26.394 "digest": "sha384", 00:15:26.394 "dhgroup": "null" 00:15:26.394 } 00:15:26.394 } 00:15:26.394 ]' 00:15:26.394 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:26.394 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:26.394 23:52:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:26.651 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:26.651 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:26.651 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.651 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.651 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.909 23:52:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:27.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:27.859 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.134 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:28.392 00:15:28.392 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:28.392 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:28.392 23:52:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:28.650 { 00:15:28.650 "cntlid": 57, 00:15:28.650 "qid": 0, 00:15:28.650 "state": "enabled", 00:15:28.650 "thread": "nvmf_tgt_poll_group_000", 00:15:28.650 "listen_address": { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.2", 00:15:28.650 "trsvcid": "4420" 00:15:28.650 }, 00:15:28.650 "peer_address": { 00:15:28.650 "trtype": "TCP", 00:15:28.650 "adrfam": "IPv4", 00:15:28.650 "traddr": "10.0.0.1", 00:15:28.650 "trsvcid": "53934" 00:15:28.650 }, 00:15:28.650 "auth": { 00:15:28.650 "state": "completed", 00:15:28.650 "digest": "sha384", 00:15:28.650 "dhgroup": "ffdhe2048" 00:15:28.650 } 00:15:28.650 } 00:15:28.650 ]' 
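[For orientation between iterations: each pass of the loop traced above is one DH-HMAC-CHAP round trip driven by target/auth.sh. Below is a minimal sketch of a single round, reconstructed only from commands visible in this log. The key names (key2/ckey2) refer to keyring entries the script registers earlier, outside this excerpt; the target-side rpc_cmd calls wrap rpc.py, but their socket is not shown here, so the default is assumed; the two DHHC-1 secrets are placeholders, not the real test values.

    # NQNs and paths as they appear in this log
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Target side: authorize the host with a key pair. Supplying a controller
    # key (ckey2) as well makes the authentication bidirectional.
    $RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side (the app behind /var/tmp/host.sock): pin the negotiation to a
    # single digest/dhgroup combination, then attach with the matching keys.
    $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Verify on the target that the new admin qpair finished authentication
    # with the parameters that were pinned above.
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'    # completed
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'   # sha384
    $RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'  # ffdhe2048

    # Repeat the same round through the kernel initiator, passing the secrets
    # directly on the command line (placeholders shown), then clean up.
    $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:02:<host secret>' \
        --dhchap-ctrl-secret 'DHHC-1:01:<ctrl secret>'
    nvme disconnect -n $SUBNQN
    $RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

In the DH-HMAC-CHAP secret format, the two digits after DHHC-1: encode how the secret is represented (00 = unhashed; 01/02/03 = transformed with SHA-256/384/512), which is why the four keys exercised in this run carry the prefixes 00: through 03:.]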
00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:28.650 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:28.907 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:28.907 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:28.907 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:28.907 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:28.908 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.165 23:52:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:30.098 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.098 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.098 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.098 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.098 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.099 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.099 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.099 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:30.099 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.356 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.357 23:53:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:30.921 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:30.921 { 00:15:30.921 "cntlid": 59, 00:15:30.921 "qid": 0, 00:15:30.921 "state": "enabled", 00:15:30.921 "thread": "nvmf_tgt_poll_group_000", 00:15:30.921 "listen_address": { 00:15:30.921 "trtype": "TCP", 00:15:30.921 "adrfam": "IPv4", 00:15:30.921 "traddr": "10.0.0.2", 00:15:30.921 "trsvcid": "4420" 00:15:30.921 }, 00:15:30.921 "peer_address": { 00:15:30.921 "trtype": "TCP", 00:15:30.921 "adrfam": "IPv4", 00:15:30.921 "traddr": "10.0.0.1", 00:15:30.921 "trsvcid": "38896" 00:15:30.921 }, 00:15:30.921 "auth": { 00:15:30.921 "state": "completed", 00:15:30.921 "digest": "sha384", 00:15:30.921 "dhgroup": "ffdhe2048" 00:15:30.921 } 00:15:30.921 } 00:15:30.921 ]' 00:15:30.921 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.179 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.436 23:53:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:32.367 23:53:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:32.624 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:32.624 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:32.624 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:32.624 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:32.624 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.625 
23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:32.625 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:33.190 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.190 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:33.447 { 00:15:33.447 "cntlid": 61, 00:15:33.447 "qid": 0, 00:15:33.447 "state": "enabled", 00:15:33.447 "thread": "nvmf_tgt_poll_group_000", 00:15:33.447 "listen_address": { 00:15:33.447 "trtype": "TCP", 00:15:33.447 "adrfam": "IPv4", 00:15:33.447 "traddr": "10.0.0.2", 00:15:33.447 "trsvcid": "4420" 00:15:33.447 }, 00:15:33.447 "peer_address": { 00:15:33.447 "trtype": "TCP", 00:15:33.447 "adrfam": "IPv4", 00:15:33.447 "traddr": "10.0.0.1", 00:15:33.447 "trsvcid": "38918" 00:15:33.447 }, 00:15:33.447 "auth": { 00:15:33.447 "state": "completed", 00:15:33.447 "digest": "sha384", 00:15:33.447 "dhgroup": "ffdhe2048" 00:15:33.447 } 00:15:33.447 } 00:15:33.447 ]' 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:33.447 23:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.447 23:53:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:33.705 23:53:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.637 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:34.895 
23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:34.895 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:35.151 00:15:35.151 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:35.151 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:35.151 23:53:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.408 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.409 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.409 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.409 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.409 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.409 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:35.409 { 00:15:35.409 "cntlid": 63, 00:15:35.409 "qid": 0, 00:15:35.409 "state": "enabled", 00:15:35.409 "thread": "nvmf_tgt_poll_group_000", 00:15:35.409 "listen_address": { 00:15:35.409 "trtype": "TCP", 00:15:35.409 "adrfam": "IPv4", 00:15:35.409 "traddr": "10.0.0.2", 00:15:35.409 "trsvcid": "4420" 00:15:35.409 }, 00:15:35.409 "peer_address": { 00:15:35.409 "trtype": "TCP", 00:15:35.409 "adrfam": "IPv4", 00:15:35.409 "traddr": "10.0.0.1", 00:15:35.409 "trsvcid": "38958" 00:15:35.409 }, 00:15:35.409 "auth": { 00:15:35.409 "state": "completed", 00:15:35.409 "digest": "sha384", 00:15:35.409 "dhgroup": "ffdhe2048" 00:15:35.409 } 00:15:35.409 } 00:15:35.409 ]' 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.666 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:15:35.924 23:53:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.856 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:36.856 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.113 23:53:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.113 23:53:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:37.678 00:15:37.678 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:37.678 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:37.678 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.678 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.679 { 00:15:37.679 "cntlid": 65, 00:15:37.679 "qid": 0, 00:15:37.679 "state": "enabled", 00:15:37.679 "thread": "nvmf_tgt_poll_group_000", 00:15:37.679 "listen_address": { 00:15:37.679 "trtype": "TCP", 00:15:37.679 "adrfam": "IPv4", 00:15:37.679 "traddr": "10.0.0.2", 00:15:37.679 "trsvcid": "4420" 00:15:37.679 }, 00:15:37.679 "peer_address": { 00:15:37.679 "trtype": "TCP", 00:15:37.679 "adrfam": "IPv4", 00:15:37.679 "traddr": "10.0.0.1", 00:15:37.679 "trsvcid": "38994" 00:15:37.679 }, 00:15:37.679 "auth": { 00:15:37.679 "state": "completed", 00:15:37.679 "digest": "sha384", 00:15:37.679 "dhgroup": "ffdhe3072" 00:15:37.679 } 00:15:37.679 } 00:15:37.679 ]' 00:15:37.679 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.936 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.193 23:53:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:39.126 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.383 23:53:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:39.640 00:15:39.640 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.640 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.640 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.898 { 00:15:39.898 "cntlid": 67, 00:15:39.898 "qid": 0, 00:15:39.898 "state": "enabled", 00:15:39.898 "thread": "nvmf_tgt_poll_group_000", 00:15:39.898 "listen_address": { 00:15:39.898 "trtype": "TCP", 00:15:39.898 "adrfam": "IPv4", 00:15:39.898 "traddr": "10.0.0.2", 00:15:39.898 "trsvcid": "4420" 00:15:39.898 }, 00:15:39.898 "peer_address": { 00:15:39.898 "trtype": "TCP", 00:15:39.898 "adrfam": "IPv4", 00:15:39.898 "traddr": "10.0.0.1", 00:15:39.898 "trsvcid": "47678" 00:15:39.898 }, 00:15:39.898 "auth": { 00:15:39.898 "state": "completed", 00:15:39.898 "digest": "sha384", 00:15:39.898 "dhgroup": "ffdhe3072" 00:15:39.898 } 00:15:39.898 } 00:15:39.898 ]' 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:39.898 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:40.155 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.155 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.155 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.412 23:53:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:41.344 23:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.344 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.345 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.601 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.601 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.601 23:53:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:41.858 00:15:41.858 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:41.858 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.858 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.115 { 00:15:42.115 "cntlid": 69, 00:15:42.115 "qid": 0, 00:15:42.115 "state": "enabled", 00:15:42.115 "thread": "nvmf_tgt_poll_group_000", 00:15:42.115 "listen_address": { 00:15:42.115 "trtype": "TCP", 00:15:42.115 "adrfam": "IPv4", 00:15:42.115 "traddr": "10.0.0.2", 00:15:42.115 "trsvcid": "4420" 00:15:42.115 }, 00:15:42.115 "peer_address": { 00:15:42.115 "trtype": "TCP", 00:15:42.115 "adrfam": "IPv4", 00:15:42.115 "traddr": "10.0.0.1", 00:15:42.115 "trsvcid": "47714" 00:15:42.115 }, 00:15:42.115 "auth": { 00:15:42.115 "state": "completed", 00:15:42.115 "digest": "sha384", 00:15:42.115 "dhgroup": "ffdhe3072" 00:15:42.115 } 00:15:42.115 } 00:15:42.115 ]' 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.115 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.373 23:53:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.336 23:53:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:43.593 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:44.158 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.158 23:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:44.158 { 00:15:44.158 "cntlid": 71, 00:15:44.158 "qid": 0, 00:15:44.158 "state": "enabled", 00:15:44.158 "thread": "nvmf_tgt_poll_group_000", 00:15:44.158 "listen_address": { 00:15:44.158 "trtype": "TCP", 00:15:44.158 "adrfam": "IPv4", 00:15:44.158 "traddr": "10.0.0.2", 00:15:44.158 "trsvcid": "4420" 00:15:44.158 }, 00:15:44.158 "peer_address": { 00:15:44.158 "trtype": "TCP", 00:15:44.158 "adrfam": "IPv4", 00:15:44.158 "traddr": "10.0.0.1", 00:15:44.158 "trsvcid": "47728" 00:15:44.158 }, 00:15:44.158 "auth": { 00:15:44.158 "state": "completed", 00:15:44.158 "digest": "sha384", 00:15:44.158 "dhgroup": "ffdhe3072" 00:15:44.158 } 00:15:44.158 } 00:15:44.158 ]' 00:15:44.158 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.415 23:53:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.673 23:53:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:45.604 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.604 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:45.604 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.604 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.604 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.605 23:53:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:45.605 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:45.605 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:45.605 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:45.862 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.119 00:15:46.377 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:46.377 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:46.377 23:53:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:46.635 23:53:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:46.635 { 00:15:46.635 "cntlid": 73, 00:15:46.635 "qid": 0, 00:15:46.635 "state": "enabled", 00:15:46.635 "thread": "nvmf_tgt_poll_group_000", 00:15:46.635 "listen_address": { 00:15:46.635 "trtype": "TCP", 00:15:46.635 "adrfam": "IPv4", 00:15:46.635 "traddr": "10.0.0.2", 00:15:46.635 "trsvcid": "4420" 00:15:46.635 }, 00:15:46.635 "peer_address": { 00:15:46.635 "trtype": "TCP", 00:15:46.635 "adrfam": "IPv4", 00:15:46.635 "traddr": "10.0.0.1", 00:15:46.635 "trsvcid": "47758" 00:15:46.635 }, 00:15:46.635 "auth": { 00:15:46.635 "state": "completed", 00:15:46.635 "digest": "sha384", 00:15:46.635 "dhgroup": "ffdhe4096" 00:15:46.635 } 00:15:46.635 } 00:15:46.635 ]' 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.635 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.893 23:53:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.824 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.082 23:53:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.647 00:15:48.647 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:48.647 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.647 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:15:48.905 { 00:15:48.905 "cntlid": 75, 00:15:48.905 "qid": 0, 00:15:48.905 "state": "enabled", 00:15:48.905 "thread": "nvmf_tgt_poll_group_000", 00:15:48.905 "listen_address": { 00:15:48.905 "trtype": "TCP", 00:15:48.905 "adrfam": "IPv4", 00:15:48.905 "traddr": "10.0.0.2", 00:15:48.905 "trsvcid": "4420" 00:15:48.905 }, 00:15:48.905 "peer_address": { 00:15:48.905 "trtype": "TCP", 00:15:48.905 "adrfam": "IPv4", 00:15:48.905 "traddr": "10.0.0.1", 00:15:48.905 "trsvcid": "47790" 00:15:48.905 }, 00:15:48.905 "auth": { 00:15:48.905 "state": "completed", 00:15:48.905 "digest": "sha384", 00:15:48.905 "dhgroup": "ffdhe4096" 00:15:48.905 } 00:15:48.905 } 00:15:48.905 ]' 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.905 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.163 23:53:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.094 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:50.352 
23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.352 23:53:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:50.926 00:15:50.926 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.926 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.926 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:51.187 { 00:15:51.187 "cntlid": 77, 00:15:51.187 "qid": 0, 00:15:51.187 "state": "enabled", 00:15:51.187 "thread": "nvmf_tgt_poll_group_000", 00:15:51.187 "listen_address": { 00:15:51.187 "trtype": "TCP", 00:15:51.187 "adrfam": "IPv4", 00:15:51.187 "traddr": "10.0.0.2", 00:15:51.187 "trsvcid": "4420" 00:15:51.187 }, 00:15:51.187 "peer_address": { 
00:15:51.187 "trtype": "TCP", 00:15:51.187 "adrfam": "IPv4", 00:15:51.187 "traddr": "10.0.0.1", 00:15:51.187 "trsvcid": "33122" 00:15:51.187 }, 00:15:51.187 "auth": { 00:15:51.187 "state": "completed", 00:15:51.187 "digest": "sha384", 00:15:51.187 "dhgroup": "ffdhe4096" 00:15:51.187 } 00:15:51.187 } 00:15:51.187 ]' 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.187 23:53:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.444 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.376 23:53:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:52.941 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:53.199 00:15:53.199 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:53.199 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.199 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:53.456 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.456 { 00:15:53.456 "cntlid": 79, 00:15:53.456 "qid": 0, 00:15:53.456 "state": "enabled", 00:15:53.456 "thread": "nvmf_tgt_poll_group_000", 00:15:53.456 "listen_address": { 00:15:53.456 "trtype": "TCP", 00:15:53.456 "adrfam": "IPv4", 00:15:53.456 "traddr": "10.0.0.2", 00:15:53.456 "trsvcid": "4420" 00:15:53.456 }, 00:15:53.456 "peer_address": { 00:15:53.456 "trtype": "TCP", 00:15:53.456 "adrfam": "IPv4", 00:15:53.457 "traddr": "10.0.0.1", 00:15:53.457 "trsvcid": "33148" 00:15:53.457 }, 00:15:53.457 "auth": { 00:15:53.457 "state": "completed", 00:15:53.457 "digest": "sha384", 00:15:53.457 "dhgroup": "ffdhe4096" 00:15:53.457 } 00:15:53.457 } 00:15:53.457 ]' 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.457 23:53:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.714 23:53:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.645 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.645 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
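The ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) expansion captured just above is what switches each pass between bidirectional and unidirectional DH-CHAP: when ckeys[keyid] is empty (key3 in this run), the :+ expansion yields nothing, the --dhchap-ctrlr-key argument disappears, and only the host is authenticated. The same split shows up on the kernel-initiator side, where the DHHC-1:03 connects for key3 carry no --dhchap-ctrl-secret. A sketch of the two nvme-cli forms, with key_secret and ctrl_secret as illustrative placeholders rather than this run's literal values:

  # Bidirectional: host secret plus controller secret (keys 0-2 in this trace).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$key_secret" --dhchap-ctrl-secret "$ctrl_secret"
  # Unidirectional: omit --dhchap-ctrl-secret (key3 in this trace); the target
  # is never asked to prove its identity back to the host.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$key_secret"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0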
00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:54.903 23:53:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:55.468 00:15:55.468 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.468 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.468 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.725 { 00:15:55.725 "cntlid": 81, 00:15:55.725 "qid": 0, 00:15:55.725 "state": "enabled", 00:15:55.725 "thread": "nvmf_tgt_poll_group_000", 00:15:55.725 "listen_address": { 00:15:55.725 "trtype": "TCP", 00:15:55.725 "adrfam": "IPv4", 00:15:55.725 "traddr": "10.0.0.2", 00:15:55.725 "trsvcid": "4420" 00:15:55.725 }, 00:15:55.725 "peer_address": { 00:15:55.725 "trtype": "TCP", 00:15:55.725 "adrfam": "IPv4", 00:15:55.725 "traddr": "10.0.0.1", 00:15:55.725 "trsvcid": "33178" 00:15:55.725 }, 00:15:55.725 "auth": { 00:15:55.725 "state": "completed", 00:15:55.725 "digest": "sha384", 00:15:55.725 "dhgroup": "ffdhe6144" 00:15:55.725 } 00:15:55.725 } 00:15:55.725 ]' 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.725 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.983 23:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.983 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.983 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.983 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.983 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.240 23:53:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:57.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.173 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.430 23:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:57.430 23:53:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.003 00:15:58.003 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:58.003 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:58.003 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:58.336 { 00:15:58.336 "cntlid": 83, 00:15:58.336 "qid": 0, 00:15:58.336 "state": "enabled", 00:15:58.336 "thread": "nvmf_tgt_poll_group_000", 00:15:58.336 "listen_address": { 00:15:58.336 "trtype": "TCP", 00:15:58.336 "adrfam": "IPv4", 00:15:58.336 "traddr": "10.0.0.2", 00:15:58.336 "trsvcid": "4420" 00:15:58.336 }, 00:15:58.336 "peer_address": { 00:15:58.336 "trtype": "TCP", 00:15:58.336 "adrfam": "IPv4", 00:15:58.336 "traddr": "10.0.0.1", 00:15:58.336 "trsvcid": "33216" 00:15:58.336 }, 00:15:58.336 "auth": { 00:15:58.336 "state": "completed", 00:15:58.336 "digest": "sha384", 00:15:58.336 "dhgroup": "ffdhe6144" 00:15:58.336 } 00:15:58.336 } 00:15:58.336 ]' 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.336 23:53:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:58.593 23:53:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.525 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:59.783 23:53:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.783 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.348 00:16:00.348 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:00.348 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:00.348 23:53:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:00.607 { 00:16:00.607 "cntlid": 85, 00:16:00.607 "qid": 0, 00:16:00.607 "state": "enabled", 00:16:00.607 "thread": "nvmf_tgt_poll_group_000", 00:16:00.607 "listen_address": { 00:16:00.607 "trtype": "TCP", 00:16:00.607 "adrfam": "IPv4", 00:16:00.607 "traddr": "10.0.0.2", 00:16:00.607 "trsvcid": "4420" 00:16:00.607 }, 00:16:00.607 "peer_address": { 00:16:00.607 "trtype": "TCP", 00:16:00.607 "adrfam": "IPv4", 00:16:00.607 "traddr": "10.0.0.1", 00:16:00.607 "trsvcid": "46576" 00:16:00.607 }, 00:16:00.607 "auth": { 00:16:00.607 "state": "completed", 00:16:00.607 "digest": "sha384", 00:16:00.607 "dhgroup": "ffdhe6144" 00:16:00.607 } 00:16:00.607 } 00:16:00.607 ]' 00:16:00.607 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.865 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:01.124 23:53:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:02.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.069 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.327 23:53:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.327 23:53:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:02.892 00:16:02.892 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:02.892 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.892 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:03.150 { 00:16:03.150 "cntlid": 87, 00:16:03.150 "qid": 0, 00:16:03.150 "state": "enabled", 00:16:03.150 "thread": "nvmf_tgt_poll_group_000", 00:16:03.150 "listen_address": { 00:16:03.150 "trtype": "TCP", 00:16:03.150 "adrfam": "IPv4", 00:16:03.150 "traddr": "10.0.0.2", 00:16:03.150 "trsvcid": "4420" 00:16:03.150 }, 00:16:03.150 "peer_address": { 00:16:03.150 "trtype": "TCP", 00:16:03.150 "adrfam": "IPv4", 00:16:03.150 "traddr": "10.0.0.1", 00:16:03.150 "trsvcid": "46600" 00:16:03.150 }, 00:16:03.150 "auth": { 00:16:03.150 "state": "completed", 00:16:03.150 "digest": "sha384", 00:16:03.150 "dhgroup": "ffdhe6144" 00:16:03.150 } 00:16:03.150 } 00:16:03.150 ]' 00:16:03.150 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.407 23:53:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.664 23:53:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.596 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:04.854 23:53:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:05.786 00:16:05.786 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:05.786 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:05.786 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.044 { 00:16:06.044 "cntlid": 89, 00:16:06.044 "qid": 0, 00:16:06.044 "state": "enabled", 00:16:06.044 "thread": "nvmf_tgt_poll_group_000", 00:16:06.044 "listen_address": { 00:16:06.044 "trtype": "TCP", 00:16:06.044 "adrfam": "IPv4", 00:16:06.044 "traddr": "10.0.0.2", 00:16:06.044 "trsvcid": "4420" 00:16:06.044 }, 00:16:06.044 "peer_address": { 00:16:06.044 "trtype": "TCP", 00:16:06.044 "adrfam": "IPv4", 00:16:06.044 "traddr": "10.0.0.1", 00:16:06.044 "trsvcid": "46628" 00:16:06.044 }, 00:16:06.044 "auth": { 00:16:06.044 "state": "completed", 00:16:06.044 "digest": "sha384", 00:16:06.044 "dhgroup": "ffdhe8192" 00:16:06.044 } 00:16:06.044 } 00:16:06.044 ]' 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.044 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.302 23:53:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:16:07.234 23:53:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.234 23:53:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.491 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.748 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.748 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:07.748 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:08.680 00:16:08.680 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.680 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.680 23:53:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.680 { 00:16:08.680 "cntlid": 91, 00:16:08.680 "qid": 0, 00:16:08.680 "state": "enabled", 00:16:08.680 "thread": "nvmf_tgt_poll_group_000", 00:16:08.680 "listen_address": { 00:16:08.680 "trtype": "TCP", 00:16:08.680 "adrfam": "IPv4", 00:16:08.680 "traddr": "10.0.0.2", 00:16:08.680 "trsvcid": "4420" 00:16:08.680 }, 00:16:08.680 "peer_address": { 00:16:08.680 "trtype": "TCP", 00:16:08.680 "adrfam": "IPv4", 00:16:08.680 "traddr": "10.0.0.1", 00:16:08.680 "trsvcid": "46662" 00:16:08.680 }, 00:16:08.680 "auth": { 00:16:08.680 "state": "completed", 00:16:08.680 "digest": "sha384", 00:16:08.680 "dhgroup": "ffdhe8192" 00:16:08.680 } 00:16:08.680 } 00:16:08.680 ]' 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:08.680 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.939 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:08.939 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.939 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.939 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.939 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.198 23:53:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.130 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.130 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:10.387 23:53:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:11.319 00:16:11.319 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:11.319 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:11.319 23:53:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:11.577 { 00:16:11.577 "cntlid": 93, 00:16:11.577 "qid": 0, 00:16:11.577 "state": "enabled", 00:16:11.577 "thread": "nvmf_tgt_poll_group_000", 00:16:11.577 "listen_address": { 00:16:11.577 "trtype": "TCP", 00:16:11.577 "adrfam": "IPv4", 00:16:11.577 "traddr": "10.0.0.2", 00:16:11.577 "trsvcid": "4420" 00:16:11.577 }, 00:16:11.577 "peer_address": { 00:16:11.577 "trtype": "TCP", 00:16:11.577 "adrfam": "IPv4", 00:16:11.577 "traddr": "10.0.0.1", 00:16:11.577 "trsvcid": "36228" 00:16:11.577 }, 00:16:11.577 "auth": { 00:16:11.577 "state": "completed", 00:16:11.577 "digest": "sha384", 00:16:11.577 "dhgroup": "ffdhe8192" 00:16:11.577 } 00:16:11.577 } 00:16:11.577 ]' 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.577 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.834 23:53:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.205 23:53:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:13.205 23:53:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:14.171 00:16:14.171 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.171 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.171 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:14.428 { 00:16:14.428 "cntlid": 95, 00:16:14.428 "qid": 0, 00:16:14.428 "state": "enabled", 00:16:14.428 "thread": "nvmf_tgt_poll_group_000", 00:16:14.428 "listen_address": { 00:16:14.428 "trtype": "TCP", 00:16:14.428 "adrfam": "IPv4", 00:16:14.428 "traddr": "10.0.0.2", 00:16:14.428 "trsvcid": "4420" 00:16:14.428 }, 00:16:14.428 "peer_address": { 00:16:14.428 "trtype": "TCP", 00:16:14.428 "adrfam": "IPv4", 00:16:14.428 "traddr": "10.0.0.1", 00:16:14.428 "trsvcid": "36254" 00:16:14.428 }, 00:16:14.428 "auth": { 00:16:14.428 "state": "completed", 00:16:14.428 "digest": "sha384", 00:16:14.428 "dhgroup": "ffdhe8192" 00:16:14.428 } 00:16:14.428 } 00:16:14.428 ]' 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.428 23:53:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:14.685 23:53:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:15.617 23:53:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.617 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.875 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.439 00:16:16.439 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:16.439 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.439 23:53:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:16.696 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.696 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.696 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.696 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.696 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.696 23:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.696 { 00:16:16.696 "cntlid": 97, 00:16:16.696 "qid": 0, 00:16:16.696 "state": "enabled", 00:16:16.696 "thread": "nvmf_tgt_poll_group_000", 00:16:16.696 "listen_address": { 00:16:16.696 "trtype": "TCP", 00:16:16.696 "adrfam": "IPv4", 00:16:16.696 "traddr": "10.0.0.2", 00:16:16.696 "trsvcid": "4420" 00:16:16.696 }, 00:16:16.696 "peer_address": { 00:16:16.696 "trtype": "TCP", 00:16:16.696 "adrfam": "IPv4", 00:16:16.696 "traddr": "10.0.0.1", 00:16:16.696 "trsvcid": "36286" 00:16:16.696 }, 00:16:16.696 "auth": { 00:16:16.697 "state": "completed", 00:16:16.697 "digest": "sha512", 00:16:16.697 "dhgroup": "null" 00:16:16.697 } 00:16:16.697 } 00:16:16.697 ]' 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.697 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.953 23:53:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:17.885 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.143 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.400 00:16:18.400 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.400 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.400 23:53:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.657 { 00:16:18.657 "cntlid": 99, 00:16:18.657 "qid": 0, 00:16:18.657 "state": "enabled", 00:16:18.657 "thread": "nvmf_tgt_poll_group_000", 00:16:18.657 "listen_address": { 00:16:18.657 "trtype": "TCP", 00:16:18.657 "adrfam": "IPv4", 00:16:18.657 
"traddr": "10.0.0.2", 00:16:18.657 "trsvcid": "4420" 00:16:18.657 }, 00:16:18.657 "peer_address": { 00:16:18.657 "trtype": "TCP", 00:16:18.657 "adrfam": "IPv4", 00:16:18.657 "traddr": "10.0.0.1", 00:16:18.657 "trsvcid": "36318" 00:16:18.657 }, 00:16:18.657 "auth": { 00:16:18.657 "state": "completed", 00:16:18.657 "digest": "sha512", 00:16:18.657 "dhgroup": "null" 00:16:18.657 } 00:16:18.657 } 00:16:18.657 ]' 00:16:18.657 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.914 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.171 23:53:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.104 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:20.361 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.362 23:53:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.362 23:53:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.927 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:20.927 { 00:16:20.927 "cntlid": 101, 00:16:20.927 "qid": 0, 00:16:20.927 "state": "enabled", 00:16:20.927 "thread": "nvmf_tgt_poll_group_000", 00:16:20.927 "listen_address": { 00:16:20.927 "trtype": "TCP", 00:16:20.927 "adrfam": "IPv4", 00:16:20.927 "traddr": "10.0.0.2", 00:16:20.927 "trsvcid": "4420" 00:16:20.927 }, 00:16:20.927 "peer_address": { 00:16:20.927 "trtype": "TCP", 00:16:20.927 "adrfam": "IPv4", 00:16:20.927 "traddr": "10.0.0.1", 00:16:20.927 "trsvcid": "58290" 00:16:20.927 }, 00:16:20.927 "auth": { 00:16:20.927 "state": "completed", 00:16:20.927 "digest": "sha512", 00:16:20.927 "dhgroup": "null" 
00:16:20.927 } 00:16:20.927 } 00:16:20.927 ]' 00:16:20.927 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.184 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.441 23:53:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.372 23:53:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.629 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:22.886 00:16:22.886 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:22.886 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:22.886 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:23.143 { 00:16:23.143 "cntlid": 103, 00:16:23.143 "qid": 0, 00:16:23.143 "state": "enabled", 00:16:23.143 "thread": "nvmf_tgt_poll_group_000", 00:16:23.143 "listen_address": { 00:16:23.143 "trtype": "TCP", 00:16:23.143 "adrfam": "IPv4", 00:16:23.143 "traddr": "10.0.0.2", 00:16:23.143 "trsvcid": "4420" 00:16:23.143 }, 00:16:23.143 "peer_address": { 00:16:23.143 "trtype": "TCP", 00:16:23.143 "adrfam": "IPv4", 00:16:23.143 "traddr": "10.0.0.1", 00:16:23.143 "trsvcid": "58330" 00:16:23.143 }, 00:16:23.143 "auth": { 00:16:23.143 "state": "completed", 00:16:23.143 "digest": "sha512", 00:16:23.143 "dhgroup": "null" 00:16:23.143 } 00:16:23.143 } 00:16:23.143 ]' 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.143 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:23.400 23:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:23.400 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:23.400 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.400 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.400 23:53:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.657 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:16:24.588 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.588 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:24.588 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.588 23:53:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.588 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.588 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:24.588 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:24.588 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.588 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.845 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.846 23:53:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:24.846 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.846 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:24.846 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:24.846 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:25.103 00:16:25.103 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:25.103 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.103 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:25.360 { 00:16:25.360 "cntlid": 105, 00:16:25.360 "qid": 0, 00:16:25.360 "state": "enabled", 00:16:25.360 "thread": "nvmf_tgt_poll_group_000", 00:16:25.360 "listen_address": { 00:16:25.360 "trtype": "TCP", 00:16:25.360 "adrfam": "IPv4", 00:16:25.360 "traddr": "10.0.0.2", 00:16:25.360 "trsvcid": "4420" 00:16:25.360 }, 00:16:25.360 "peer_address": { 00:16:25.360 "trtype": "TCP", 00:16:25.360 "adrfam": "IPv4", 00:16:25.360 "traddr": "10.0.0.1", 00:16:25.360 "trsvcid": "58340" 00:16:25.360 }, 00:16:25.360 "auth": { 00:16:25.360 "state": "completed", 00:16:25.360 "digest": "sha512", 00:16:25.360 "dhgroup": "ffdhe2048" 00:16:25.360 } 00:16:25.360 } 00:16:25.360 ]' 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:25.360 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:25.617 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.617 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.617 23:53:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.875 23:53:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:26.807 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.063 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.064 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.064 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.064 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:16:27.064 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.064 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.321 00:16:27.321 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.321 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.321 23:53:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.578 { 00:16:27.578 "cntlid": 107, 00:16:27.578 "qid": 0, 00:16:27.578 "state": "enabled", 00:16:27.578 "thread": "nvmf_tgt_poll_group_000", 00:16:27.578 "listen_address": { 00:16:27.578 "trtype": "TCP", 00:16:27.578 "adrfam": "IPv4", 00:16:27.578 "traddr": "10.0.0.2", 00:16:27.578 "trsvcid": "4420" 00:16:27.578 }, 00:16:27.578 "peer_address": { 00:16:27.578 "trtype": "TCP", 00:16:27.578 "adrfam": "IPv4", 00:16:27.578 "traddr": "10.0.0.1", 00:16:27.578 "trsvcid": "58354" 00:16:27.578 }, 00:16:27.578 "auth": { 00:16:27.578 "state": "completed", 00:16:27.578 "digest": "sha512", 00:16:27.578 "dhgroup": "ffdhe2048" 00:16:27.578 } 00:16:27.578 } 00:16:27.578 ]' 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.578 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.835 23:53:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:28.768 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
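Every connect_authenticate pass captured in this log follows one shape; distilled into a standalone sketch, a single sha512/null pass over key0 looks like the script below. This is a re-assembly from the RPCs visible in the trace, not the verbatim target/auth.sh: the target-side RPC socket (/var/tmp/spdk.sock) is assumed to be SPDK's default rather than shown in the log, and the kernel-initiator step is only referenced in a comment since its literal DHHC-1 secrets come from the test's key files.

  #!/usr/bin/env bash
  set -e
  # Two SPDK apps are driven here: the host-side bdev app (the log's
  # "hostrpc" calls, socket /var/tmp/host.sock) and the nvmf target (the
  # log's "rpc_cmd" calls; default socket assumed).
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
  tgtrpc()  { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }

  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # Pin the host driver to the single digest/dhgroup pair under test.
  hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null

  # Register the host on the subsystem with the key pair, then dial in.
  tgtrpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # The qpair must come up authenticated with exactly what was configured.
  [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  tgtrpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'   # state/digest/dhgroup

  # Tear down, then the test repeats the dial-in from the kernel initiator
  # (the log's "nvme connect ... --dhchap-secret DHHC-1:..." step) before
  # removing the host registration for the next key/dhgroup iteration.
  hostrpc bdev_nvme_detach_controller nvme0
  tgtrpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq checks mirror the log's assertions ([[ sha512 == ... ]], [[ null == ... ]], [[ completed == ... ]]): a qpair that reaches "state": "enabled" with "auth": {"state": "completed"} proves the DHCHAP exchange succeeded for that digest/dhgroup/key combination.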
00:16:29.057 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.315 00:16:29.315 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.315 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.315 23:53:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.878 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.878 { 00:16:29.878 "cntlid": 109, 00:16:29.878 "qid": 0, 00:16:29.878 "state": "enabled", 00:16:29.878 "thread": "nvmf_tgt_poll_group_000", 00:16:29.878 "listen_address": { 00:16:29.878 "trtype": "TCP", 00:16:29.878 "adrfam": "IPv4", 00:16:29.878 "traddr": "10.0.0.2", 00:16:29.878 "trsvcid": "4420" 00:16:29.878 }, 00:16:29.878 "peer_address": { 00:16:29.878 "trtype": "TCP", 00:16:29.879 "adrfam": "IPv4", 00:16:29.879 "traddr": "10.0.0.1", 00:16:29.879 "trsvcid": "38056" 00:16:29.879 }, 00:16:29.879 "auth": { 00:16:29.879 "state": "completed", 00:16:29.879 "digest": "sha512", 00:16:29.879 "dhgroup": "ffdhe2048" 00:16:29.879 } 00:16:29.879 } 00:16:29.879 ]' 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.879 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.136 23:54:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.068 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.325 23:54:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.583 00:16:31.583 23:54:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.583 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.583 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:31.840 { 00:16:31.840 "cntlid": 111, 00:16:31.840 "qid": 0, 00:16:31.840 "state": "enabled", 00:16:31.840 "thread": "nvmf_tgt_poll_group_000", 00:16:31.840 "listen_address": { 00:16:31.840 "trtype": "TCP", 00:16:31.840 "adrfam": "IPv4", 00:16:31.840 "traddr": "10.0.0.2", 00:16:31.840 "trsvcid": "4420" 00:16:31.840 }, 00:16:31.840 "peer_address": { 00:16:31.840 "trtype": "TCP", 00:16:31.840 "adrfam": "IPv4", 00:16:31.840 "traddr": "10.0.0.1", 00:16:31.840 "trsvcid": "38082" 00:16:31.840 }, 00:16:31.840 "auth": { 00:16:31.840 "state": "completed", 00:16:31.840 "digest": "sha512", 00:16:31.840 "dhgroup": "ffdhe2048" 00:16:31.840 } 00:16:31.840 } 00:16:31.840 ]' 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.840 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.098 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:32.098 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.098 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.098 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.098 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.355 23:54:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.286 23:54:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.286 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.543 23:54:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.801 00:16:33.801 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.801 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.801 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.058 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:34.058 { 00:16:34.058 "cntlid": 113, 00:16:34.058 "qid": 0, 00:16:34.058 "state": "enabled", 00:16:34.058 "thread": "nvmf_tgt_poll_group_000", 00:16:34.058 "listen_address": { 00:16:34.058 "trtype": "TCP", 00:16:34.058 "adrfam": "IPv4", 00:16:34.058 "traddr": "10.0.0.2", 00:16:34.058 "trsvcid": "4420" 00:16:34.058 }, 00:16:34.058 "peer_address": { 00:16:34.058 "trtype": "TCP", 00:16:34.058 "adrfam": "IPv4", 00:16:34.058 "traddr": "10.0.0.1", 00:16:34.058 "trsvcid": "38120" 00:16:34.058 }, 00:16:34.058 "auth": { 00:16:34.058 "state": "completed", 00:16:34.058 "digest": "sha512", 00:16:34.058 "dhgroup": "ffdhe3072" 00:16:34.058 } 00:16:34.058 } 00:16:34.058 ]' 00:16:34.059 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:34.059 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:34.059 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:34.316 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:34.316 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:34.316 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.316 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.316 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.573 23:54:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.506 23:54:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:35.763 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:36.021 00:16:36.021 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.021 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.021 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.278 { 00:16:36.278 "cntlid": 115, 00:16:36.278 "qid": 0, 00:16:36.278 "state": "enabled", 00:16:36.278 "thread": "nvmf_tgt_poll_group_000", 00:16:36.278 "listen_address": { 00:16:36.278 "trtype": "TCP", 00:16:36.278 "adrfam": "IPv4", 00:16:36.278 "traddr": "10.0.0.2", 00:16:36.278 "trsvcid": "4420" 00:16:36.278 }, 00:16:36.278 "peer_address": { 00:16:36.278 "trtype": "TCP", 00:16:36.278 "adrfam": "IPv4", 00:16:36.278 "traddr": "10.0.0.1", 00:16:36.278 "trsvcid": "38154" 00:16:36.278 }, 00:16:36.278 "auth": { 00:16:36.278 "state": "completed", 00:16:36.278 "digest": "sha512", 00:16:36.278 "dhgroup": "ffdhe3072" 00:16:36.278 } 00:16:36.278 } 00:16:36.278 ]' 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.278 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.535 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.535 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.535 23:54:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.792 23:54:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.725 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.725 23:54:08 
00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:37.725 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:37.983 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:38.240
00:16:38.240 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:38.240 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:38.240 23:54:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:38.498 {
00:16:38.498 "cntlid": 117,
00:16:38.498 "qid": 0,
00:16:38.498 "state": "enabled",
00:16:38.498 "thread": "nvmf_tgt_poll_group_000",
00:16:38.498 "listen_address": {
00:16:38.498 "trtype": "TCP",
00:16:38.498 "adrfam": "IPv4",
00:16:38.498 "traddr": "10.0.0.2",
00:16:38.498 "trsvcid": "4420"
00:16:38.498 },
00:16:38.498 "peer_address": {
00:16:38.498 "trtype": "TCP",
00:16:38.498 "adrfam": "IPv4",
00:16:38.498 "traddr": "10.0.0.1",
00:16:38.498 "trsvcid": "38180"
00:16:38.498 },
00:16:38.498 "auth": {
00:16:38.498 "state": "completed",
00:16:38.498 "digest": "sha512",
00:16:38.498 "dhgroup": "ffdhe3072"
00:16:38.498 }
00:16:38.498 }
00:16:38.498 ]'
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:38.498 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:38.756 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:38.756 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:38.756 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:39.014 23:54:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS:
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:39.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
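The escaped patterns such as [[ sha512 == \s\h\a\5\1\2 ]] are an xtrace artifact, not corruption: with set -x, bash prints the quoted right-hand side of a [[ == ]] comparison with every character backslash-escaped to show it is matched as a literal pattern. The three qpair checks amount to the following (a sketch reusing the test's rpc_cmd helper):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe3072" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]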
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:39.948 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:40.206 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:40.463
00:16:40.463 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:40.463 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:40.463 23:54:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:40.721 {
00:16:40.721 "cntlid": 119,
00:16:40.721 "qid": 0,
00:16:40.721 "state": "enabled",
00:16:40.721 "thread": "nvmf_tgt_poll_group_000",
00:16:40.721 "listen_address": {
00:16:40.721 "trtype": "TCP",
00:16:40.721 "adrfam": "IPv4",
00:16:40.721 "traddr": "10.0.0.2",
00:16:40.721 "trsvcid": "4420"
00:16:40.721 },
00:16:40.721 "peer_address": {
00:16:40.721 "trtype": "TCP",
00:16:40.721 "adrfam": "IPv4",
00:16:40.721 "traddr": "10.0.0.1",
00:16:40.721 "trsvcid": "34594"
00:16:40.721 },
00:16:40.721 "auth": {
00:16:40.721 "state": "completed",
00:16:40.721 "digest": "sha512",
00:16:40.721 "dhgroup": "ffdhe3072"
00:16:40.721 }
00:16:40.721 }
00:16:40.721 ]'
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:40.721 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:40.979 23:54:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=:
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:41.911 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:41.911 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.169 23:54:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:42.734
00:16:42.734 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:42.734 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:42.734 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:42.990 {
00:16:42.990 "cntlid": 121,
00:16:42.990 "qid": 0,
00:16:42.990 "state": "enabled",
00:16:42.990 "thread": "nvmf_tgt_poll_group_000",
00:16:42.990 "listen_address": {
00:16:42.990 "trtype": "TCP",
00:16:42.990 "adrfam": "IPv4",
00:16:42.990 "traddr": "10.0.0.2",
00:16:42.990 "trsvcid": "4420"
00:16:42.990 },
00:16:42.990 "peer_address": {
00:16:42.990 "trtype": "TCP",
00:16:42.990 "adrfam": "IPv4",
00:16:42.990 "traddr": "10.0.0.1",
00:16:42.990 "trsvcid": "34624"
00:16:42.990 },
00:16:42.990 "auth": {
00:16:42.990 "state": "completed",
00:16:42.990 "digest": "sha512",
00:16:42.990 "dhgroup": "ffdhe4096"
00:16:42.990 }
00:16:42.990 }
00:16:42.990 ]'
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:42.990 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:43.247 23:54:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=:
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:44.206 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:44.206 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:44.464 23:54:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:45.029
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:45.029 {
00:16:45.029 "cntlid": 123,
00:16:45.029 "qid": 0,
00:16:45.029 "state": "enabled",
00:16:45.029 "thread": "nvmf_tgt_poll_group_000",
00:16:45.029 "listen_address": {
00:16:45.029 "trtype": "TCP",
00:16:45.029 "adrfam": "IPv4",
00:16:45.029 "traddr": "10.0.0.2",
00:16:45.029 "trsvcid": "4420"
00:16:45.029 },
00:16:45.029 "peer_address": {
00:16:45.029 "trtype": "TCP",
00:16:45.029 "adrfam": "IPv4",
00:16:45.029 "traddr": "10.0.0.1",
00:16:45.029 "trsvcid": "34660"
00:16:45.029 },
00:16:45.029 "auth": {
00:16:45.029 "state": "completed",
00:16:45.029 "digest": "sha512",
00:16:45.029 "dhgroup": "ffdhe4096"
00:16:45.029 }
00:16:45.029 }
00:16:45.029 ]'
00:16:45.029 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:45.286 23:54:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:45.544 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==:
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:46.516 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:46.516 23:54:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
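The ckey=(...) line just traced relies on bash's :+ (alternate-value) parameter expansion: the --dhchap-ctrlr-key argument pair is produced only when ckeys[$3] is set and non-empty, which is why every key3 iteration in this trace adds the host and attaches the controller without a controller key. A minimal standalone illustration (array values hypothetical):

    ckeys=([0]=x [1]=x [2]=x [3]=)             # key3 deliberately has no controller key
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"                          # 0 -> the flag is omitted entirely; 2 for keyid 0..2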
"ckey$3"}) 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.774 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.032 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.289 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.547 { 00:16:47.547 "cntlid": 125, 00:16:47.547 "qid": 0, 00:16:47.547 "state": "enabled", 00:16:47.547 "thread": "nvmf_tgt_poll_group_000", 00:16:47.547 "listen_address": { 00:16:47.547 "trtype": "TCP", 00:16:47.547 "adrfam": "IPv4", 00:16:47.547 "traddr": "10.0.0.2", 00:16:47.547 "trsvcid": "4420" 00:16:47.547 }, 00:16:47.547 "peer_address": { 00:16:47.547 "trtype": "TCP", 00:16:47.547 "adrfam": "IPv4", 00:16:47.547 "traddr": "10.0.0.1", 00:16:47.547 "trsvcid": "34696" 00:16:47.547 }, 00:16:47.547 "auth": { 00:16:47.547 "state": "completed", 00:16:47.547 "digest": "sha512", 00:16:47.547 "dhgroup": "ffdhe4096" 00:16:47.547 } 00:16:47.547 } 00:16:47.547 ]' 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.547 
23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.547 23:54:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.547 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.547 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.547 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.805 23:54:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.737 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:48.995 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.561 00:16:49.561 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.561 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.561 23:54:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.561 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.561 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.561 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.561 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.819 { 00:16:49.819 "cntlid": 127, 00:16:49.819 "qid": 0, 00:16:49.819 "state": "enabled", 00:16:49.819 "thread": "nvmf_tgt_poll_group_000", 00:16:49.819 "listen_address": { 00:16:49.819 "trtype": "TCP", 00:16:49.819 "adrfam": "IPv4", 00:16:49.819 "traddr": "10.0.0.2", 00:16:49.819 "trsvcid": "4420" 00:16:49.819 }, 00:16:49.819 "peer_address": { 00:16:49.819 "trtype": "TCP", 00:16:49.819 "adrfam": "IPv4", 00:16:49.819 "traddr": "10.0.0.1", 00:16:49.819 "trsvcid": "38402" 00:16:49.819 }, 00:16:49.819 "auth": { 00:16:49.819 "state": "completed", 00:16:49.819 "digest": "sha512", 00:16:49.819 "dhgroup": "ffdhe4096" 00:16:49.819 } 00:16:49.819 } 00:16:49.819 ]' 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.819 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.077 23:54:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.011 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.011 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
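With the ffdhe4096 sweep done, the outer loop now advances to ffdhe6144. Note the shape of the secrets handed to nvme connect: in the DH-HMAC-CHAP secret representation, DHHC-1:<xx>:<base64>: carries a version tag, a transform field, and base64-encoded key material. As I read the spec, 00 marks an untransformed secret while 01/02/03 mark secrets transformed with SHA-256/384/512, which matches key0..key3 in this trace and the growing payload lengths. A simple field split in bash (payload is a placeholder, not a real key):

    secret='DHHC-1:03:<base64-key-material>:'   # hypothetical value
    IFS=: read -r ver xform b64 _ <<< "$secret"
    echo "$ver $xform"                           # prints: DHHC-1 03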
00:16:51.010 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:51.011 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:51.011 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:51.011 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:51.268 23:54:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:51.833
00:16:51.833 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:51.833 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:51.833 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:52.091 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:52.091 {
00:16:52.091 "cntlid": 129,
00:16:52.091 "qid": 0,
00:16:52.091 "state": "enabled",
00:16:52.091 "thread": "nvmf_tgt_poll_group_000",
00:16:52.091 "listen_address": {
00:16:52.091 "trtype": "TCP",
00:16:52.091 "adrfam": "IPv4",
00:16:52.091 "traddr": "10.0.0.2",
00:16:52.091 "trsvcid": "4420"
00:16:52.091 },
00:16:52.091 "peer_address": {
00:16:52.091 "trtype": "TCP",
00:16:52.091 "adrfam": "IPv4",
00:16:52.091 "traddr": "10.0.0.1",
00:16:52.091 "trsvcid": "38430"
00:16:52.091 },
00:16:52.091 "auth": {
00:16:52.091 "state": "completed",
00:16:52.091 "digest": "sha512",
00:16:52.091 "dhgroup": "ffdhe6144"
00:16:52.091 }
00:16:52.091 }
00:16:52.091 ]'
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:52.349 23:54:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:52.606 23:54:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=:
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:53.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:53.539 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:53.796 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:54.360
00:16:54.360 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:54.360 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:54.360 23:54:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:54.617 {
00:16:54.617 "cntlid": 131,
00:16:54.617 "qid": 0,
00:16:54.617 "state": "enabled",
00:16:54.617 "thread": "nvmf_tgt_poll_group_000",
00:16:54.617 "listen_address": {
00:16:54.617 "trtype": "TCP",
00:16:54.617 "adrfam": "IPv4",
00:16:54.617 "traddr": "10.0.0.2",
00:16:54.617 "trsvcid": "4420"
00:16:54.617 },
00:16:54.617 "peer_address": {
00:16:54.617 "trtype": "TCP",
00:16:54.617 "adrfam": "IPv4",
00:16:54.617 "traddr": "10.0.0.1",
00:16:54.617 "trsvcid": "38450"
00:16:54.617 },
00:16:54.617 "auth": {
00:16:54.617 "state": "completed",
00:16:54.617 "digest": "sha512",
00:16:54.617 "dhgroup": "ffdhe6144"
00:16:54.617 }
00:16:54.617 }
00:16:54.617 ]'
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:54.617 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:54.875 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:54.875 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:54.875 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:55.133 23:54:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==:
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:56.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:56.064 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.322 23:54:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:56.887
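A note on the two RPC channels interleaved above: every hostrpc line expands to rpc.py with -s /var/tmp/host.sock. The test drives two SPDK processes at once, the NVMe-oF target on the default RPC socket (rpc_cmd, the nvmf_* calls) and a host-side bdev application on host.sock (the bdev_nvme_* calls), so authentication is exercised end to end within one run. The wrapper behind target/auth.sh@31 is presumably along these lines ($rootdir assumed from SPDK test conventions):

    hostrpc() {
        "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }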
00:16:56.887 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:56.887 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:56.887 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:57.144 {
00:16:57.144 "cntlid": 133,
00:16:57.144 "qid": 0,
00:16:57.144 "state": "enabled",
00:16:57.144 "thread": "nvmf_tgt_poll_group_000",
00:16:57.144 "listen_address": {
00:16:57.144 "trtype": "TCP",
00:16:57.144 "adrfam": "IPv4",
00:16:57.144 "traddr": "10.0.0.2",
00:16:57.144 "trsvcid": "4420"
00:16:57.144 },
00:16:57.144 "peer_address": {
00:16:57.144 "trtype": "TCP",
00:16:57.144 "adrfam": "IPv4",
00:16:57.144 "traddr": "10.0.0.1",
00:16:57.144 "trsvcid": "38474"
00:16:57.144 },
00:16:57.144 "auth": {
00:16:57.144 "state": "completed",
00:16:57.144 "digest": "sha512",
00:16:57.144 "dhgroup": "ffdhe6144"
00:16:57.144 }
00:16:57.144 }
00:16:57.144 ]'
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:57.144 23:54:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:57.400 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:57.400 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:57.400 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:57.400 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:57.657 23:54:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS:
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:58.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:58.590 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:58.855 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:16:58.855 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:58.855 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:58.855 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:58.855 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:58.856 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:59.478
00:16:59.478 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:59.478 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:59.478 23:54:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:59.735 {
00:16:59.735 "cntlid": 135,
00:16:59.735 "qid": 0,
00:16:59.735 "state": "enabled",
00:16:59.735 "thread": "nvmf_tgt_poll_group_000",
00:16:59.735 "listen_address": {
00:16:59.735 "trtype": "TCP",
00:16:59.735 "adrfam": "IPv4",
00:16:59.735 "traddr": "10.0.0.2",
00:16:59.735 "trsvcid": "4420"
00:16:59.735 },
00:16:59.735 "peer_address": {
00:16:59.735 "trtype": "TCP",
00:16:59.735 "adrfam": "IPv4",
00:16:59.735 "traddr": "10.0.0.1",
00:16:59.735 "trsvcid": "48472"
00:16:59.735 },
00:16:59.735 "auth": {
00:16:59.735 "state": "completed",
00:16:59.735 "digest": "sha512",
00:16:59.735 "dhgroup": "ffdhe6144"
00:16:59.735 }
00:16:59.735 }
00:16:59.735 ]'
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:59.735 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:59.994 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:59.994 23:54:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=:
00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:00.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set
+x 00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:00.927 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.185 23:54:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.119 00:17:02.119 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.119 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.119 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
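Each successful attach is then verified the same way before teardown. A sketch, reusing the shorthand above; the expected values are the ones the [[ ... ]] comparisons in this trace check for:

# Host side: a controller named nvme0 should exist.
$RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

# Target side: the qpair should report the negotiated auth parameters.
qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
jq -r '.[0].auth.digest'  <<< "$qpairs"   # expected: sha512
jq -r '.[0].auth.dhgroup' <<< "$qpairs"   # expected: ffdhe8192 in this round
jq -r '.[0].auth.state'   <<< "$qpairs"   # expected: completed

# Detach before the next digest/dhgroup/key combination.
$RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0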
00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.377 { 00:17:02.377 "cntlid": 137, 00:17:02.377 "qid": 0, 00:17:02.377 "state": "enabled", 00:17:02.377 "thread": "nvmf_tgt_poll_group_000", 00:17:02.377 "listen_address": { 00:17:02.377 "trtype": "TCP", 00:17:02.377 "adrfam": "IPv4", 00:17:02.377 "traddr": "10.0.0.2", 00:17:02.377 "trsvcid": "4420" 00:17:02.377 }, 00:17:02.377 "peer_address": { 00:17:02.377 "trtype": "TCP", 00:17:02.377 "adrfam": "IPv4", 00:17:02.377 "traddr": "10.0.0.1", 00:17:02.377 "trsvcid": "48502" 00:17:02.377 }, 00:17:02.377 "auth": { 00:17:02.377 "state": "completed", 00:17:02.377 "digest": "sha512", 00:17:02.377 "dhgroup": "ffdhe8192" 00:17:02.377 } 00:17:02.377 } 00:17:02.377 ]' 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.377 23:54:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:02.941 23:54:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.875 23:54:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.806 00:17:04.806 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.806 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.806 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.064 { 00:17:05.064 "cntlid": 139, 00:17:05.064 "qid": 0, 00:17:05.064 "state": "enabled", 00:17:05.064 "thread": "nvmf_tgt_poll_group_000", 00:17:05.064 "listen_address": { 00:17:05.064 "trtype": "TCP", 00:17:05.064 "adrfam": "IPv4", 00:17:05.064 "traddr": "10.0.0.2", 00:17:05.064 "trsvcid": "4420" 00:17:05.064 }, 00:17:05.064 "peer_address": { 00:17:05.064 "trtype": "TCP", 00:17:05.064 "adrfam": "IPv4", 00:17:05.064 "traddr": "10.0.0.1", 00:17:05.064 "trsvcid": "48516" 00:17:05.064 }, 00:17:05.064 "auth": { 00:17:05.064 "state": "completed", 00:17:05.064 "digest": "sha512", 00:17:05.064 "dhgroup": "ffdhe8192" 00:17:05.064 } 00:17:05.064 } 00:17:05.064 ]' 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.064 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:05.321 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:05.321 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:05.321 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.321 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.321 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:05.578 23:54:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTEzOTZhNDdmOGJjZGYwNmMwZDNjNmI1ODU3Y2ViYzAnMdaF: --dhchap-ctrl-secret DHHC-1:02:ZDdmODVlMWE0NzhhOGIwOTM3ZDcyZGMyMzk0Y2YzNjFjNDg1OTUyMGU1NTAxYWFjhKE+og==: 00:17:06.509 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.509 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.510 23:54:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.767 23:54:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.698 00:17:07.698 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.698 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.698 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.956 { 00:17:07.956 "cntlid": 141, 00:17:07.956 "qid": 0, 00:17:07.956 "state": "enabled", 00:17:07.956 "thread": "nvmf_tgt_poll_group_000", 00:17:07.956 "listen_address": 
{ 00:17:07.956 "trtype": "TCP", 00:17:07.956 "adrfam": "IPv4", 00:17:07.956 "traddr": "10.0.0.2", 00:17:07.956 "trsvcid": "4420" 00:17:07.956 }, 00:17:07.956 "peer_address": { 00:17:07.956 "trtype": "TCP", 00:17:07.956 "adrfam": "IPv4", 00:17:07.956 "traddr": "10.0.0.1", 00:17:07.956 "trsvcid": "48556" 00:17:07.956 }, 00:17:07.956 "auth": { 00:17:07.956 "state": "completed", 00:17:07.956 "digest": "sha512", 00:17:07.956 "dhgroup": "ffdhe8192" 00:17:07.956 } 00:17:07.956 } 00:17:07.956 ]' 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.956 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.213 23:54:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YzM3N2ZlODljZTlhNDY5NzA0YWVmYzc0NWVmNzU3N2I1N2VhY2ZjODllMTIzYjkymGg5QA==: --dhchap-ctrl-secret DHHC-1:01:NTNlNWZkN2NmZTA5MzRkZGZiYzIwNjFkYTRkNjJjNDTL7hgS: 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.146 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.146 23:54:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.712 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:10.646 00:17:10.646 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.646 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.646 23:54:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.646 { 00:17:10.646 "cntlid": 143, 00:17:10.646 "qid": 0, 00:17:10.646 "state": "enabled", 00:17:10.646 "thread": "nvmf_tgt_poll_group_000", 00:17:10.646 "listen_address": { 00:17:10.646 "trtype": "TCP", 00:17:10.646 "adrfam": "IPv4", 00:17:10.646 "traddr": "10.0.0.2", 00:17:10.646 "trsvcid": "4420" 00:17:10.646 }, 00:17:10.646 "peer_address": { 00:17:10.646 "trtype": "TCP", 00:17:10.646 "adrfam": "IPv4", 00:17:10.646 "traddr": "10.0.0.1", 00:17:10.646 "trsvcid": "38496" 00:17:10.646 }, 00:17:10.646 "auth": { 00:17:10.646 "state": "completed", 00:17:10.646 "digest": "sha512", 00:17:10.646 "dhgroup": 
"ffdhe8192" 00:17:10.646 } 00:17:10.646 } 00:17:10.646 ]' 00:17:10.646 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.904 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.162 23:54:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.095 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.353 23:54:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.283 00:17:13.283 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.283 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.283 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.541 { 00:17:13.541 "cntlid": 145, 00:17:13.541 "qid": 0, 00:17:13.541 "state": "enabled", 00:17:13.541 "thread": "nvmf_tgt_poll_group_000", 00:17:13.541 "listen_address": { 00:17:13.541 "trtype": "TCP", 00:17:13.541 "adrfam": "IPv4", 00:17:13.541 "traddr": "10.0.0.2", 00:17:13.541 "trsvcid": "4420" 00:17:13.541 }, 00:17:13.541 "peer_address": { 00:17:13.541 "trtype": "TCP", 00:17:13.541 "adrfam": "IPv4", 00:17:13.541 "traddr": "10.0.0.1", 00:17:13.541 "trsvcid": "38524" 00:17:13.541 }, 00:17:13.541 "auth": { 00:17:13.541 
"state": "completed", 00:17:13.541 "digest": "sha512", 00:17:13.541 "dhgroup": "ffdhe8192" 00:17:13.541 } 00:17:13.541 } 00:17:13.541 ]' 00:17:13.541 23:54:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.541 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.819 23:54:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjI2Y2M3Y2EyYWUwZmMwODgyMjI1MTlmZjJhMDA1NTE1MTU3MWVhZjM4YjFkNGEweEdSlA==: --dhchap-ctrl-secret DHHC-1:03:MTFjNjQ0ZjUxOWM0YmM5ZjZjMDliM2E0NWIxMGU4ZWM3ZGMyNDcxYjk5ZTVmOWEzMmIzZWNhMzlmYTE4ZmZjMgLBZMc=: 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.777 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:15.035 23:54:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.035 23:54:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:17:15.601 request: 00:17:15.601 { 00:17:15.601 "name": "nvme0", 00:17:15.601 "trtype": "tcp", 00:17:15.601 "traddr": "10.0.0.2", 00:17:15.601 "adrfam": "ipv4", 00:17:15.601 "trsvcid": "4420", 00:17:15.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:15.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:15.601 "prchk_reftag": false, 00:17:15.601 "prchk_guard": false, 00:17:15.601 "hdgst": false, 00:17:15.601 "ddgst": false, 00:17:15.601 "dhchap_key": "key2", 00:17:15.601 "method": "bdev_nvme_attach_controller", 00:17:15.601 "req_id": 1 00:17:15.601 } 00:17:15.601 Got JSON-RPC error response 00:17:15.601 response: 00:17:15.601 { 00:17:15.601 "code": -5, 00:17:15.601 "message": "Input/output error" 00:17:15.601 } 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.601 
23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:15.601 23:54:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:16.534 request: 00:17:16.534 { 00:17:16.534 "name": "nvme0", 00:17:16.534 "trtype": "tcp", 00:17:16.534 "traddr": "10.0.0.2", 00:17:16.534 "adrfam": "ipv4", 00:17:16.534 "trsvcid": "4420", 00:17:16.534 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:16.534 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:16.534 "prchk_reftag": false, 00:17:16.534 "prchk_guard": false, 00:17:16.534 "hdgst": false, 00:17:16.534 "ddgst": false, 00:17:16.534 "dhchap_key": "key1", 00:17:16.534 "dhchap_ctrlr_key": "ckey2", 00:17:16.534 "method": "bdev_nvme_attach_controller", 00:17:16.534 "req_id": 1 00:17:16.534 } 00:17:16.534 Got JSON-RPC error response 00:17:16.534 response: 00:17:16.534 { 00:17:16.534 "code": -5, 00:17:16.534 "message": "Input/output error" 00:17:16.534 } 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:16.534 23:54:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.534 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.535 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.467 request: 00:17:17.467 { 00:17:17.467 "name": "nvme0", 00:17:17.467 "trtype": "tcp", 00:17:17.467 "traddr": "10.0.0.2", 00:17:17.467 "adrfam": "ipv4", 00:17:17.467 "trsvcid": "4420", 00:17:17.467 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:17.467 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:17.467 "prchk_reftag": false, 00:17:17.467 "prchk_guard": false, 00:17:17.467 "hdgst": false, 00:17:17.467 "ddgst": false, 00:17:17.467 "dhchap_key": "key1", 00:17:17.467 "dhchap_ctrlr_key": "ckey1", 00:17:17.467 "method": "bdev_nvme_attach_controller", 00:17:17.467 "req_id": 1 00:17:17.467 } 00:17:17.467 Got JSON-RPC error response 00:17:17.467 response: 00:17:17.467 { 00:17:17.467 "code": -5, 00:17:17.467 "message": "Input/output error" 00:17:17.467 } 00:17:17.467 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:17.467 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.467 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.467 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 3366986 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3366986 ']' 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3366986 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3366986 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3366986' 00:17:17.468 killing process with pid 3366986 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3366986 00:17:17.468 23:54:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3366986 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=3389580 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 3389580 00:17:17.725 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3389580 ']' 00:17:17.726 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.726 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.726 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.726 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.726 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 3389580 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 3389580 ']' 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
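The failed attaches logged above all follow one pattern: the target is given only key1 for this host, so an attach presenting a different or mismatched key is rejected during authentication, and the host RPC surfaces it as JSON-RPC error -5 (Input/output error). A sketch of that negative path (the trace wraps the call in the autotest NOT helper to assert failure; plain ! approximates it here):

# Target side: expose only key1 for this host.
$RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

# Host side: attaching with key2 must now fail with error -5.
! $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key2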
00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:17.983 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.241 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.241 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:17:18.241 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:17:18.241 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.241 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:18.499 23:54:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:19.429 00:17:19.429 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:19.429 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:19.429 23:54:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:19.687 { 00:17:19.687 "cntlid": 1, 00:17:19.687 "qid": 0, 00:17:19.687 "state": "enabled", 00:17:19.687 "thread": "nvmf_tgt_poll_group_000", 00:17:19.687 "listen_address": { 00:17:19.687 "trtype": "TCP", 00:17:19.687 "adrfam": "IPv4", 00:17:19.687 "traddr": "10.0.0.2", 00:17:19.687 "trsvcid": "4420" 00:17:19.687 }, 00:17:19.687 "peer_address": { 00:17:19.687 "trtype": "TCP", 00:17:19.687 "adrfam": "IPv4", 00:17:19.687 "traddr": "10.0.0.1", 00:17:19.687 "trsvcid": "38572" 00:17:19.687 }, 00:17:19.687 "auth": { 00:17:19.687 "state": "completed", 00:17:19.687 "digest": "sha512", 00:17:19.687 "dhgroup": "ffdhe8192" 00:17:19.687 } 00:17:19.687 } 00:17:19.687 ]' 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.687 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.944 23:54:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YzUwYjA3ZjVlZTY2YjRjZmM3NDhhNWY3NTNhNDY2Njk3ODY5NTIwMDdmZDAxZmExYTljMTIzZDk4NzE4M2VjOK6X1pg=: 00:17:20.876 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.876 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:20.876 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.876 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:21.132 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.389 23:54:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.646 request: 00:17:21.646 { 00:17:21.646 "name": "nvme0", 00:17:21.646 "trtype": "tcp", 00:17:21.646 "traddr": "10.0.0.2", 00:17:21.646 "adrfam": "ipv4", 00:17:21.646 "trsvcid": "4420", 00:17:21.646 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:21.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:21.646 "prchk_reftag": false, 00:17:21.646 "prchk_guard": false, 00:17:21.646 "hdgst": false, 00:17:21.646 "ddgst": false, 00:17:21.646 "dhchap_key": "key3", 00:17:21.646 "method": "bdev_nvme_attach_controller", 00:17:21.646 "req_id": 1 00:17:21.646 } 00:17:21.646 Got JSON-RPC error response 00:17:21.646 response: 00:17:21.646 { 00:17:21.646 "code": -5, 00:17:21.646 "message": "Input/output error" 00:17:21.646 } 00:17:21.646 23:54:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:21.646 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:21.903 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.160 request: 00:17:22.160 { 00:17:22.160 "name": "nvme0", 00:17:22.160 "trtype": "tcp", 00:17:22.160 "traddr": "10.0.0.2", 00:17:22.160 "adrfam": "ipv4", 00:17:22.160 "trsvcid": "4420", 00:17:22.160 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:22.160 "prchk_reftag": false, 00:17:22.160 "prchk_guard": false, 00:17:22.160 "hdgst": false, 00:17:22.160 "ddgst": false, 00:17:22.160 "dhchap_key": "key3", 00:17:22.160 
"method": "bdev_nvme_attach_controller", 00:17:22.160 "req_id": 1 00:17:22.160 } 00:17:22.160 Got JSON-RPC error response 00:17:22.160 response: 00:17:22.160 { 00:17:22.160 "code": -5, 00:17:22.160 "message": "Input/output error" 00:17:22.160 } 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.160 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:22.417 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.417 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.417 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.418 23:54:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:22.675 request: 00:17:22.675 { 00:17:22.675 "name": "nvme0", 00:17:22.675 "trtype": "tcp", 00:17:22.675 "traddr": "10.0.0.2", 00:17:22.675 "adrfam": "ipv4", 00:17:22.675 "trsvcid": "4420", 00:17:22.675 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:22.675 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:22.675 "prchk_reftag": false, 00:17:22.675 "prchk_guard": false, 00:17:22.675 "hdgst": false, 00:17:22.675 "ddgst": false, 00:17:22.675 "dhchap_key": "key0", 00:17:22.675 "dhchap_ctrlr_key": "key1", 00:17:22.675 "method": "bdev_nvme_attach_controller", 00:17:22.675 "req_id": 1 00:17:22.675 } 00:17:22.675 Got JSON-RPC error response 00:17:22.675 response: 00:17:22.675 { 00:17:22.675 "code": -5, 00:17:22.675 "message": "Input/output error" 00:17:22.675 } 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:22.675 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:22.932 00:17:22.932 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:22.932 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
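The bdev_nvme_get_controllers / jq -r '.[].name' pair traced here verifies that the preceding attach with key0 really registered a controller named nvme0 on the host-side RPC server. A sketch of the same check done by hand, assuming the host app's socket at /var/tmp/host.sock as in this run:

    # list attached controllers and confirm the expected bdev name came back
    name=$(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] && echo "nvme0 attached"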
00:17:22.932 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.189 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.189 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.189 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3367104 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3367104 ']' 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3367104 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3367104 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3367104' 00:17:23.447 killing process with pid 3367104 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3367104 00:17:23.447 23:54:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3367104 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:24.011 rmmod nvme_tcp 00:17:24.011 rmmod nvme_fabrics 00:17:24.011 rmmod nvme_keyring 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- 
# '[' -n 3389580 ']' 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 3389580 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 3389580 ']' 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 3389580 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3389580 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3389580' 00:17:24.011 killing process with pid 3389580 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 3389580 00:17:24.011 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 3389580 00:17:24.268 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.268 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.268 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.268 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.268 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.269 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.269 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:24.269 23:54:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1in /tmp/spdk.key-sha256.04s /tmp/spdk.key-sha384.aVh /tmp/spdk.key-sha512.epl /tmp/spdk.key-sha512.WLG /tmp/spdk.key-sha384.AEk /tmp/spdk.key-sha256.QTU '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:26.796 00:17:26.796 real 3m10.098s 00:17:26.796 user 7m21.780s 00:17:26.796 sys 0m25.022s 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.796 ************************************ 00:17:26.796 END TEST nvmf_auth_target 00:17:26.796 ************************************ 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:26.796 23:54:56 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.796 ************************************ 00:17:26.796 START TEST nvmf_bdevio_no_huge 00:17:26.796 ************************************ 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:26.796 * Looking for test storage... 00:17:26.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.796 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.797 23:54:56 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.797 23:54:56 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:28.170 23:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:28.170 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:28.428 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.428 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.428 23:54:58 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:28.429 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:28.429 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
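Each PCI function that survives the device-ID filters above is mapped to its kernel netdev through sysfs before being recorded in net_devs. A sketch of that lookup in isolation, using the two E810 functions found in this run:

    # every net interface bound to a PCI function appears under its sysfs node
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        ls "/sys/bus/pci/devices/$pci/net/"
    done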
00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:28.429 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:28.429 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:17:28.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:17:28.429 00:17:28.429 --- 10.0.0.2 ping statistics --- 00:17:28.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.429 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:28.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:28.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:17:28.429 00:17:28.429 --- 10.0.0.1 ping statistics --- 00:17:28.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:28.429 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=3392327 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 3392327 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 3392327 ']' 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
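With connectivity across the namespace boundary verified by the pings above, the no-hugepages target is brought up on plain anonymous memory; -s caps the DPDK memory pool in MB. An illustrative invocation mirroring the traced one (binary path as in this workspace):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78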
00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:28.429 23:54:58 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.429 [2024-07-24 23:54:58.992076] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:17:28.429 [2024-07-24 23:54:58.992150] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:28.688 [2024-07-24 23:54:59.060884] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:28.688 [2024-07-24 23:54:59.169436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.688 [2024-07-24 23:54:59.169497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.688 [2024-07-24 23:54:59.169525] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.688 [2024-07-24 23:54:59.169537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.688 [2024-07-24 23:54:59.169546] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.688 [2024-07-24 23:54:59.169679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:28.688 [2024-07-24 23:54:59.169742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:28.688 [2024-07-24 23:54:59.169807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:28.688 [2024-07-24 23:54:59.169809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.688 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.945 [2024-07-24 23:54:59.301799] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.945 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.946 23:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 Malloc0 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:28.946 [2024-07-24 23:54:59.339986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:28.946 { 00:17:28.946 "params": { 00:17:28.946 "name": "Nvme$subsystem", 00:17:28.946 "trtype": "$TEST_TRANSPORT", 00:17:28.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:28.946 "adrfam": "ipv4", 00:17:28.946 "trsvcid": "$NVMF_PORT", 00:17:28.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:28.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:28.946 "hdgst": ${hdgst:-false}, 00:17:28.946 "ddgst": ${ddgst:-false} 00:17:28.946 }, 00:17:28.946 "method": "bdev_nvme_attach_controller" 00:17:28.946 } 00:17:28.946 EOF 00:17:28.946 )") 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
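gen_nvmf_target_json, traced above, renders one bdev_nvme_attach_controller stanza per subsystem from a shell heredoc and validates the result with jq; bdevio then receives it as --json /dev/fd/62, bash's process-substitution path for the generated config. A sketch of that delivery pattern (paths as used by this test):

    # the config never touches disk; <() hands bdevio an already-open fd
    ./test/bdev/bdevio/bdevio --json <(gen_nvmf_target_json) --no-huge -s 1024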
00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:28.946 23:54:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:28.946 "params": { 00:17:28.946 "name": "Nvme1", 00:17:28.946 "trtype": "tcp", 00:17:28.946 "traddr": "10.0.0.2", 00:17:28.946 "adrfam": "ipv4", 00:17:28.946 "trsvcid": "4420", 00:17:28.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.946 "hdgst": false, 00:17:28.946 "ddgst": false 00:17:28.946 }, 00:17:28.946 "method": "bdev_nvme_attach_controller" 00:17:28.946 }' 00:17:28.946 [2024-07-24 23:54:59.387609] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:17:28.946 [2024-07-24 23:54:59.387693] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3392351 ] 00:17:28.946 [2024-07-24 23:54:59.449940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.203 [2024-07-24 23:54:59.566275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.203 [2024-07-24 23:54:59.566302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.203 [2024-07-24 23:54:59.566305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.203 I/O targets: 00:17:29.203 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:29.203 00:17:29.203 00:17:29.203 CUnit - A unit testing framework for C - Version 2.1-3 00:17:29.203 http://cunit.sourceforge.net/ 00:17:29.203 00:17:29.203 00:17:29.203 Suite: bdevio tests on: Nvme1n1 00:17:29.203 Test: blockdev write read block ...passed 00:17:29.203 Test: blockdev write zeroes read block ...passed 00:17:29.460 Test: blockdev write zeroes read no split ...passed 00:17:29.460 Test: blockdev write zeroes read split ...passed 00:17:29.460 Test: blockdev write zeroes read split partial ...passed 00:17:29.461 Test: blockdev reset ...[2024-07-24 23:54:59.890776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:29.461 [2024-07-24 23:54:59.890885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eadfb0 (9): Bad file descriptor 00:17:29.461 [2024-07-24 23:54:59.904026] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:17:29.461 passed 00:17:29.461 Test: blockdev write read 8 blocks ...passed 00:17:29.461 Test: blockdev write read size > 128k ...passed 00:17:29.461 Test: blockdev write read invalid size ...passed 00:17:29.461 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:29.461 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:29.461 Test: blockdev write read max offset ...passed 00:17:29.461 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:29.461 Test: blockdev writev readv 8 blocks ...passed 00:17:29.718 Test: blockdev writev readv 30 x 1block ...passed 00:17:29.718 Test: blockdev writev readv block ...passed 00:17:29.718 Test: blockdev writev readv size > 128k ...passed 00:17:29.718 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:29.718 Test: blockdev comparev and writev ...[2024-07-24 23:55:00.159995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.160072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.160463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.160509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.160908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.160953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.160970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.161333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.161357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.161378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:29.718 [2024-07-24 23:55:00.161394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:29.718 passed 00:17:29.718 Test: blockdev nvme passthru rw ...passed 00:17:29.718 Test: blockdev nvme passthru vendor specific ...[2024-07-24 23:55:00.245522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.718 [2024-07-24 23:55:00.245549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.245739] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.718 [2024-07-24 23:55:00.245762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.245948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.718 [2024-07-24 23:55:00.245971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:29.718 [2024-07-24 23:55:00.246165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:29.718 [2024-07-24 23:55:00.246188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:29.718 passed 00:17:29.718 Test: blockdev nvme admin passthru ...passed 00:17:29.718 Test: blockdev copy ...passed 00:17:29.718 00:17:29.718 Run Summary: Type Total Ran Passed Failed Inactive 00:17:29.718 suites 1 1 n/a 0 0 00:17:29.718 tests 23 23 23 0 0 00:17:29.718 asserts 152 152 152 0 n/a 00:17:29.718 00:17:29.718 Elapsed time = 1.156 seconds 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:30.289 rmmod nvme_tcp 00:17:30.289 rmmod nvme_fabrics 00:17:30.289 rmmod nvme_keyring 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 3392327 ']' 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 3392327 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 3392327 ']' 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 3392327 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3392327 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3392327' 00:17:30.289 killing process with pid 3392327 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 3392327 00:17:30.289 23:55:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 3392327 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:30.599 23:55:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.126 00:17:33.126 real 0m6.385s 00:17:33.126 user 0m10.290s 00:17:33.126 sys 0m2.376s 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:33.126 ************************************ 00:17:33.126 END TEST nvmf_bdevio_no_huge 00:17:33.126 ************************************ 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:33.126 ************************************ 00:17:33.126 START TEST nvmf_tls 00:17:33.126 ************************************ 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:33.126 * Looking for test storage... 00:17:33.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:33.126 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
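At this point nvmf/common.sh has finished building the target's host identity and base command line. A short sketch of the two pieces that matter later in this trace (the generated host NQN and the nvmf_tgt invocation both appear elsewhere in the log; treat this as an approximation of the helper, not its exact code):

# host NQN comes from nvme-cli; the uuid is random per run
NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
# shm id and tracepoint mask, later expanded as: nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)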
00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:33.127 23:55:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
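With NET_TYPE=phy, the scan below walks the PCI bus for supported NICs, matches the two e810 ports at 0000:0a:00.0/0000:0a:00.1 (net devices cvl_0_0 and cvl_0_1), and nvmf_tcp_init then builds an isolated point-to-point test network: the target interface moves into a network namespace while the initiator stays in the root namespace. A condensed sketch of that bring-up, using only commands that appear verbatim in the trace that follows:

# target side (cvl_0_0, 10.0.0.2) lives in its own netns; initiator (cvl_0_1, 10.0.0.1) does not
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2   # initiator -> target sanity check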
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:35.027 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:35.027 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:35.027 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.027 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:35.028 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:35.028 23:55:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:35.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:35.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:17:35.028 00:17:35.028 --- 10.0.0.2 ping statistics --- 00:17:35.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.028 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:35.028 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:35.028 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:17:35.028 00:17:35.028 --- 10.0.0.1 ping statistics --- 00:17:35.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:35.028 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3394437 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3394437 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3394437 ']' 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.028 [2024-07-24 23:55:05.418733] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:17:35.028 [2024-07-24 23:55:05.418825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.028 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.028 [2024-07-24 23:55:05.486532] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.028 [2024-07-24 23:55:05.593141] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.028 [2024-07-24 23:55:05.593191] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.028 [2024-07-24 23:55:05.593219] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.028 [2024-07-24 23:55:05.593230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.028 [2024-07-24 23:55:05.593240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:35.028 [2024-07-24 23:55:05.593297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.028 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.286 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.286 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:35.286 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:35.286 true 00:17:35.542 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.543 23:55:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:35.543 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:35.543 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:35.543 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:35.799 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:35.799 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:36.363 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:36.363 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:36.363 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:17:36.363 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.363 23:55:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:36.620 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:36.620 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:36.620 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:36.621 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:36.878 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:36.878 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:36.878 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:37.135 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.135 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:37.392 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:37.392 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:37.392 23:55:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:37.649 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:37.649 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:37.907 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.KQSforYgvC 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.sLYUuCHSaz 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.KQSforYgvC 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.sLYUuCHSaz 00:17:38.165 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:38.423 23:55:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:38.680 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.KQSforYgvC 00:17:38.680 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.KQSforYgvC 00:17:38.680 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:38.938 [2024-07-24 23:55:09.449082] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:38.938 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:39.196 23:55:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:39.453 [2024-07-24 23:55:10.014645] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:39.453 [2024-07-24 23:55:10.014926] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:39.453 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:39.711 malloc0 00:17:39.711 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:39.969 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KQSforYgvC 00:17:40.228 [2024-07-24 23:55:10.735670] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:40.228 23:55:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.KQSforYgvC 00:17:40.228 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.422 Initializing NVMe Controllers 00:17:52.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:52.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:52.422 Initialization complete. Launching workers. 00:17:52.422 ======================================================== 00:17:52.422 Latency(us) 00:17:52.422 Device Information : IOPS MiB/s Average min max 00:17:52.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7826.48 30.57 8180.01 1271.40 10263.88 00:17:52.422 ======================================================== 00:17:52.422 Total : 7826.48 30.57 8180.01 1271.40 10263.88 00:17:52.422 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQSforYgvC 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KQSforYgvC' 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3396312 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3396312 /var/tmp/bdevperf.sock 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3396312 ']' 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:52.422 23:55:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:52.422 [2024-07-24 23:55:20.918694] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:17:52.422 [2024-07-24 23:55:20.918773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3396312 ] 00:17:52.422 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.422 [2024-07-24 23:55:20.975575] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.422 [2024-07-24 23:55:21.081563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:52.422 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.422 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:17:52.422 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KQSforYgvC 00:17:52.422 [2024-07-24 23:55:21.462708] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:52.422 [2024-07-24 23:55:21.462835] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:52.422 TLSTESTn1 00:17:52.422 23:55:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:52.422 Running I/O for 10 seconds... 
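A note on the two NVMeTLSkey-1 strings generated earlier (stored in /tmp/tmp.KQSforYgvC and /tmp/tmp.sLYUuCHSaz): the format is the prefix, a two-digit hash indicator (01 here), and a base64 field terminated by ':'. Decoding the base64 from this log yields the ASCII key plus four extra bytes, so format_interchange_psk evidently appends a checksum before encoding; a minimal sketch in the same python-heredoc style nvmf/common.sh uses, where the little-endian CRC-32 trailer is an assumption inferred from the decoded output:

python3 - << 'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"    # first key used in this run
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed 4-byte little-endian CRC-32 trailer
print("NVMeTLSkey-1:01:" + base64.b64encode(key + crc).decode() + ":")
EOF

The resulting string is what nvmf_subsystem_add_host --psk and bdev_nvme_attach_controller --psk consume (via the chmod-0600 key files above), so target and initiator must hold the same key for the TLS handshake to succeed.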
00:18:02.412 00:18:02.412 Latency(us) 00:18:02.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:02.412 Verification LBA range: start 0x0 length 0x2000 00:18:02.412 TLSTESTn1 : 10.02 3167.32 12.37 0.00 0.00 40336.22 7281.78 45049.93 00:18:02.412 =================================================================================================================== 00:18:02.412 Total : 3167.32 12.37 0.00 0.00 40336.22 7281.78 45049.93 00:18:02.412 0 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3396312 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3396312 ']' 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3396312 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3396312 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3396312' 00:18:02.412 killing process with pid 3396312 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3396312 00:18:02.412 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.412 00:18:02.412 Latency(us) 00:18:02.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.412 =================================================================================================================== 00:18:02.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:02.412 [2024-07-24 23:55:31.751521] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3396312 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sLYUuCHSaz 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sLYUuCHSaz 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
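What follows is a deliberate negative test: the attach is repeated with the second key file (/tmp/tmp.sLYUuCHSaz), which was never registered for host1 on cnode1 via nvmf_subsystem_add_host, so the TLS handshake is expected to fail and bdev_nvme_attach_controller should return an error. The NOT wrapper inverts the exit status to assert exactly that; a simplified sketch of such a helper (the real one in common/autotest_common.sh also validates its arguments, as the es/valid_exec_arg lines below show):

# hedged sketch: succeed only when the wrapped command fails
NOT() {
    ! "$@"
}
# usage as above: NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sLYUuCHSaz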
00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.sLYUuCHSaz 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.sLYUuCHSaz' 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3397627 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3397627 /var/tmp/bdevperf.sock 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3397627 ']' 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.412 23:55:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 [2024-07-24 23:55:32.036214] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:18:02.412 [2024-07-24 23:55:32.036331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397627 ] 00:18:02.412 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.412 [2024-07-24 23:55:32.095842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.412 [2024-07-24 23:55:32.204829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.sLYUuCHSaz 00:18:02.412 [2024-07-24 23:55:32.524374] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.412 [2024-07-24 23:55:32.524520] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.412 [2024-07-24 23:55:32.531471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.412 [2024-07-24 23:55:32.531618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2391f90 (107): Transport endpoint is not connected 00:18:02.412 [2024-07-24 23:55:32.532609] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2391f90 (9): Bad file descriptor 00:18:02.412 [2024-07-24 23:55:32.533607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.412 [2024-07-24 23:55:32.533624] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.412 [2024-07-24 23:55:32.533656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:02.412 request: 00:18:02.412 { 00:18:02.412 "name": "TLSTEST", 00:18:02.412 "trtype": "tcp", 00:18:02.412 "traddr": "10.0.0.2", 00:18:02.412 "adrfam": "ipv4", 00:18:02.412 "trsvcid": "4420", 00:18:02.412 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.412 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:02.412 "prchk_reftag": false, 00:18:02.412 "prchk_guard": false, 00:18:02.412 "hdgst": false, 00:18:02.412 "ddgst": false, 00:18:02.412 "psk": "/tmp/tmp.sLYUuCHSaz", 00:18:02.412 "method": "bdev_nvme_attach_controller", 00:18:02.412 "req_id": 1 00:18:02.412 } 00:18:02.412 Got JSON-RPC error response 00:18:02.412 response: 00:18:02.412 { 00:18:02.412 "code": -5, 00:18:02.412 "message": "Input/output error" 00:18:02.412 } 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3397627 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3397627 ']' 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3397627 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3397627 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3397627' 00:18:02.412 killing process with pid 3397627 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3397627 00:18:02.412 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.412 00:18:02.412 Latency(us) 00:18:02.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.412 =================================================================================================================== 00:18:02.412 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.412 [2024-07-24 23:55:32.582574] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3397627 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:02.412 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KQSforYgvC 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KQSforYgvC 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.KQSforYgvC 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KQSforYgvC' 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3397679 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3397679 /var/tmp/bdevperf.sock 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3397679 ']' 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:02.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:02.413 23:55:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.413 [2024-07-24 23:55:32.888005] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
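This case attaches as host2 with a key the target only has on file for host1, so the handshake below fails with `Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1`. The identity the target resolves is built from the connecting host NQN and the subsystem NQN (the `01` suffix presumably identifies the PSK hash in use); a sketch of its shape, templated directly from the error text:

    def tls_psk_identity(hostnqn: str, subnqn: str, hash_id: str = "01") -> str:
        # Shape of the identity in the "Could not find PSK for identity"
        # errors below; the target keys its PSK lookup on this string.
        return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

    print(tls_psk_identity("nqn.2016-06.io.spdk:host2",
                           "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1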
00:18:02.413 [2024-07-24 23:55:32.888082] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397679 ] 00:18:02.413 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.413 [2024-07-24 23:55:32.947625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.670 [2024-07-24 23:55:33.053604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:02.670 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:02.670 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:02.670 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.KQSforYgvC 00:18:02.928 [2024-07-24 23:55:33.402715] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:02.928 [2024-07-24 23:55:33.402825] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:02.928 [2024-07-24 23:55:33.413600] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:02.928 [2024-07-24 23:55:33.413629] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:02.928 [2024-07-24 23:55:33.413678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:02.928 [2024-07-24 23:55:33.413755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6bf90 (107): Transport endpoint is not connected 00:18:02.928 [2024-07-24 23:55:33.414688] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa6bf90 (9): Bad file descriptor 00:18:02.928 [2024-07-24 23:55:33.415687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:02.928 [2024-07-24 23:55:33.415710] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:02.928 [2024-07-24 23:55:33.415741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:02.928 request: 00:18:02.928 { 00:18:02.928 "name": "TLSTEST", 00:18:02.928 "trtype": "tcp", 00:18:02.928 "traddr": "10.0.0.2", 00:18:02.928 "adrfam": "ipv4", 00:18:02.928 "trsvcid": "4420", 00:18:02.928 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:02.928 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:02.928 "prchk_reftag": false, 00:18:02.928 "prchk_guard": false, 00:18:02.928 "hdgst": false, 00:18:02.928 "ddgst": false, 00:18:02.928 "psk": "/tmp/tmp.KQSforYgvC", 00:18:02.928 "method": "bdev_nvme_attach_controller", 00:18:02.928 "req_id": 1 00:18:02.928 } 00:18:02.928 Got JSON-RPC error response 00:18:02.928 response: 00:18:02.928 { 00:18:02.928 "code": -5, 00:18:02.928 "message": "Input/output error" 00:18:02.928 } 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3397679 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3397679 ']' 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3397679 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3397679 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3397679' 00:18:02.928 killing process with pid 3397679 00:18:02.928 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3397679 00:18:02.928 Received shutdown signal, test time was about 10.000000 seconds 00:18:02.928 00:18:02.928 Latency(us) 00:18:02.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.928 =================================================================================================================== 00:18:02.929 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:02.929 [2024-07-24 23:55:33.458925] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:02.929 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3397679 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQSforYgvC 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQSforYgvC 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.KQSforYgvC 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.KQSforYgvC' 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3397787 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3397787 /var/tmp/bdevperf.sock 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3397787 ']' 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.186 23:55:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.186 [2024-07-24 23:55:33.730016] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
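Each rejected attach in this section comes back to the RPC caller as `"code": -5, "message": "Input/output error"`: the JSON-RPC error code is a negated errno, and errno 5 is EIO. A quick check:

    import errno
    import os

    # "code": -5 in the error responses above and below is -EIO.
    assert errno.EIO == 5
    print(os.strerror(errno.EIO))  # Input/output error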
00:18:03.186 [2024-07-24 23:55:33.730098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397787 ] 00:18:03.186 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.186 [2024-07-24 23:55:33.788206] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.443 [2024-07-24 23:55:33.896655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.443 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:03.443 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:03.443 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.KQSforYgvC 00:18:03.700 [2024-07-24 23:55:34.242440] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.700 [2024-07-24 23:55:34.242590] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.700 [2024-07-24 23:55:34.248083] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.700 [2024-07-24 23:55:34.248113] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:03.700 [2024-07-24 23:55:34.248168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:03.700 [2024-07-24 23:55:34.248673] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272f90 (107): Transport endpoint is not connected 00:18:03.700 [2024-07-24 23:55:34.249659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2272f90 (9): Bad file descriptor 00:18:03.700 [2024-07-24 23:55:34.250657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:18:03.700 [2024-07-24 23:55:34.250678] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:03.700 [2024-07-24 23:55:34.250708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:18:03.700 request: 00:18:03.700 { 00:18:03.700 "name": "TLSTEST", 00:18:03.700 "trtype": "tcp", 00:18:03.700 "traddr": "10.0.0.2", 00:18:03.700 "adrfam": "ipv4", 00:18:03.700 "trsvcid": "4420", 00:18:03.700 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:03.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:03.700 "prchk_reftag": false, 00:18:03.700 "prchk_guard": false, 00:18:03.700 "hdgst": false, 00:18:03.700 "ddgst": false, 00:18:03.700 "psk": "/tmp/tmp.KQSforYgvC", 00:18:03.700 "method": "bdev_nvme_attach_controller", 00:18:03.700 "req_id": 1 00:18:03.700 } 00:18:03.700 Got JSON-RPC error response 00:18:03.700 response: 00:18:03.700 { 00:18:03.700 "code": -5, 00:18:03.700 "message": "Input/output error" 00:18:03.700 } 00:18:03.700 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3397787 00:18:03.700 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3397787 ']' 00:18:03.700 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3397787 00:18:03.700 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3397787 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3397787' 00:18:03.701 killing process with pid 3397787 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3397787 00:18:03.701 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.701 00:18:03.701 Latency(us) 00:18:03.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.701 =================================================================================================================== 00:18:03.701 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:03.701 [2024-07-24 23:55:34.298384] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:03.701 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3397787 00:18:03.958 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:03.958 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3397926 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3397926 /var/tmp/bdevperf.sock 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3397926 ']' 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:03.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.959 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.217 [2024-07-24 23:55:34.599505] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
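The `18446744073709551616.00` entries in the zeroed-out latency summaries above (and again below) are UINT64_MAX printed as a double: presumably a minimum-latency field seeded with the largest 64-bit value that nothing ever lowers, since these aborted runs complete zero I/Os before shutdown. The printed value rounds up by one in the float conversion:

    print(2**64 - 1)                  # 18446744073709551615 (UINT64_MAX)
    print(f"{float(2**64 - 1):.2f}")  # 18446744073709551616.00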
00:18:04.217 [2024-07-24 23:55:34.599583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3397926 ] 00:18:04.217 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.217 [2024-07-24 23:55:34.656829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.217 [2024-07-24 23:55:34.759582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.474 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.474 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:04.474 23:55:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:04.732 [2024-07-24 23:55:35.108176] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:04.732 [2024-07-24 23:55:35.109668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2507770 (9): Bad file descriptor 00:18:04.732 [2024-07-24 23:55:35.110664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:04.732 [2024-07-24 23:55:35.110684] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:04.732 [2024-07-24 23:55:35.110716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:04.732 request: 00:18:04.732 { 00:18:04.732 "name": "TLSTEST", 00:18:04.732 "trtype": "tcp", 00:18:04.732 "traddr": "10.0.0.2", 00:18:04.732 "adrfam": "ipv4", 00:18:04.732 "trsvcid": "4420", 00:18:04.732 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.732 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.732 "prchk_reftag": false, 00:18:04.732 "prchk_guard": false, 00:18:04.732 "hdgst": false, 00:18:04.732 "ddgst": false, 00:18:04.732 "method": "bdev_nvme_attach_controller", 00:18:04.732 "req_id": 1 00:18:04.732 } 00:18:04.732 Got JSON-RPC error response 00:18:04.732 response: 00:18:04.732 { 00:18:04.732 "code": -5, 00:18:04.732 "message": "Input/output error" 00:18:04.732 } 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3397926 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3397926 ']' 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3397926 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3397926 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3397926' 00:18:04.732 killing process with pid 3397926 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3397926 00:18:04.732 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.732 00:18:04.732 Latency(us) 00:18:04.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.732 =================================================================================================================== 00:18:04.732 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.732 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3397926 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 3394437 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3394437 ']' 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3394437 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3394437 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3394437' 00:18:04.990 killing process with pid 3394437 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3394437 00:18:04.990 [2024-07-24 23:55:35.419201] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.990 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3394437 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.T8tnZHv9yR 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.T8tnZHv9yR 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:05.248 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3398077 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3398077 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3398077 ']' 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.249 23:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:05.249 23:55:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.249 [2024-07-24 23:55:35.793892] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:05.249 [2024-07-24 23:55:35.793959] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:05.249 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.249 [2024-07-24 23:55:35.854978] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.506 [2024-07-24 23:55:35.964878] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:05.506 [2024-07-24 23:55:35.964940] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:05.506 [2024-07-24 23:55:35.964968] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:05.506 [2024-07-24 23:55:35.964980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:05.506 [2024-07-24 23:55:35.964990] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
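The `key_long` value assembled a few entries up, `NVMeTLSkey-1:02:MDAx...wWXNJw==:`, is the TLS PSK interchange format: the configured key bytes followed by their little-endian CRC32, base64-encoded between a `NVMeTLSkey-1:<hash>:` prefix and a trailing colon. A sketch of the transform the harness's inline `python -` performs, which should reproduce the value in this trace:

    import base64
    import zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # key bytes + little-endian CRC32, base64-encoded, framed as
        # NVMeTLSkey-1:<digest>:<b64>: per the interchange format.
        raw = key.encode()
        crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
        return f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(raw + crc).decode()}:"

    print(format_interchange_psk(
        "00112233445566778899aabbccddeeff0011223344556677", 2))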
00:18:05.506 [2024-07-24 23:55:35.965019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T8tnZHv9yR 00:18:05.507 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:06.071 [2024-07-24 23:55:36.382023] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:06.071 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:06.071 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:06.329 [2024-07-24 23:55:36.879398] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:06.329 [2024-07-24 23:55:36.879674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:06.329 23:55:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:06.587 malloc0 00:18:06.588 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:06.845 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:07.103 [2024-07-24 23:55:37.709830] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8tnZHv9yR 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T8tnZHv9yR' 00:18:07.361 23:55:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3398356 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3398356 /var/tmp/bdevperf.sock 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3398356 ']' 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:07.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:07.361 23:55:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:07.361 [2024-07-24 23:55:37.778422] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:07.361 [2024-07-24 23:55:37.778505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3398356 ] 00:18:07.361 EAL: No free 2048 kB hugepages reported on node 1 00:18:07.361 [2024-07-24 23:55:37.840825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.361 [2024-07-24 23:55:37.952940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:07.618 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:07.618 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:07.618 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:07.876 [2024-07-24 23:55:38.319189] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:07.876 [2024-07-24 23:55:38.319331] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:07.876 TLSTESTn1 00:18:07.876 23:55:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:08.135 Running I/O for 10 seconds... 
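As a consistency check on the result table that follows: with the 4096-byte verify workload, MiB/s is simply IOPS x 4096 / 2**20, which matches both this run (3413.60 IOPS -> 13.33 MiB/s) and the earlier successful run (3167.32 -> 12.37):

    # MiB/s = IOPS * io_size / 2**20 for the 4 KiB verify workload:
    for iops in (3413.60, 3167.32):
        print(f"{iops * 4096 / 2**20:.2f} MiB/s")  # 13.33 MiB/s, 12.37 MiB/s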
00:18:18.093 00:18:18.093 Latency(us) 00:18:18.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:18.093 Verification LBA range: start 0x0 length 0x2000 00:18:18.093 TLSTESTn1 : 10.03 3413.60 13.33 0.00 0.00 37420.00 6092.42 69128.34 00:18:18.093 =================================================================================================================== 00:18:18.093 Total : 3413.60 13.33 0.00 0.00 37420.00 6092.42 69128.34 00:18:18.093 0 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 3398356 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3398356 ']' 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3398356 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398356 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398356' 00:18:18.093 killing process with pid 3398356 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3398356 00:18:18.093 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.093 00:18:18.093 Latency(us) 00:18:18.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.093 =================================================================================================================== 00:18:18.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:18.093 [2024-07-24 23:55:48.645403] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:18.093 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3398356 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.T8tnZHv9yR 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8tnZHv9yR 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8tnZHv9yR 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:18:18.351 
23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.T8tnZHv9yR 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.T8tnZHv9yR' 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3399684 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3399684 /var/tmp/bdevperf.sock 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3399684 ']' 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:18.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.351 23:55:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.351 [2024-07-24 23:55:48.956434] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
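The `chmod 0666` above deliberately loosens the key file, so the attach below is refused with `Incorrect permissions for PSK file` and surfaces as JSON-RPC code -1 (Operation not permitted) rather than -5. A sketch of that kind of gate, assuming the rule is simply that group/other permission bits must be clear (the helper is illustrative, not SPDK's C implementation):

    import os
    import stat

    def psk_permissions_ok(path: str) -> bool:
        # 0600 passes; the 0666 set above trips this check.
        return stat.S_IMODE(os.stat(path).st_mode) & 0o077 == 0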
00:18:18.351 [2024-07-24 23:55:48.956516] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3399684 ] 00:18:18.609 EAL: No free 2048 kB hugepages reported on node 1 00:18:18.609 [2024-07-24 23:55:49.012996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.609 [2024-07-24 23:55:49.117003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.866 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.866 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:18.866 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:19.124 [2024-07-24 23:55:49.494109] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:19.124 [2024-07-24 23:55:49.494200] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:19.124 [2024-07-24 23:55:49.494216] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.T8tnZHv9yR 00:18:19.124 request: 00:18:19.124 { 00:18:19.124 "name": "TLSTEST", 00:18:19.124 "trtype": "tcp", 00:18:19.124 "traddr": "10.0.0.2", 00:18:19.124 "adrfam": "ipv4", 00:18:19.124 "trsvcid": "4420", 00:18:19.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.124 "prchk_reftag": false, 00:18:19.124 "prchk_guard": false, 00:18:19.124 "hdgst": false, 00:18:19.124 "ddgst": false, 00:18:19.124 "psk": "/tmp/tmp.T8tnZHv9yR", 00:18:19.124 "method": "bdev_nvme_attach_controller", 00:18:19.124 "req_id": 1 00:18:19.124 } 00:18:19.124 Got JSON-RPC error response 00:18:19.124 response: 00:18:19.124 { 00:18:19.124 "code": -1, 00:18:19.124 "message": "Operation not permitted" 00:18:19.124 } 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 3399684 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3399684 ']' 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3399684 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3399684 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3399684' 00:18:19.124 killing process with pid 3399684 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3399684 00:18:19.124 Received shutdown signal, test time was about 10.000000 seconds 00:18:19.124 
00:18:19.124 Latency(us) 00:18:19.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:19.124 =================================================================================================================== 00:18:19.124 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:19.124 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3399684 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 3398077 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3398077 ']' 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3398077 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3398077 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3398077' 00:18:19.382 killing process with pid 3398077 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3398077 00:18:19.382 [2024-07-24 23:55:49.839363] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:19.382 23:55:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3398077 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3399832 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3399832 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3399832 ']' 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.639 23:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:19.639 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.640 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:19.640 23:55:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 [2024-07-24 23:55:50.188769] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:19.640 [2024-07-24 23:55:50.188848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.640 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.897 [2024-07-24 23:55:50.256712] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.897 [2024-07-24 23:55:50.368864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.897 [2024-07-24 23:55:50.368925] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.897 [2024-07-24 23:55:50.368942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:19.897 [2024-07-24 23:55:50.368956] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:19.897 [2024-07-24 23:55:50.368968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
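Note on the failure above: this step of tls.sh deliberately attaches with a world-readable key. SPDK refuses to load a TLS PSK from a file whose permissions are too open ("Incorrect permissions for PSK file"), and rpc.py surfaces that as JSON-RPC error -1, "Operation not permitted". A minimal sketch of the check being exercised, with the key path and flags taken from this run (the full Jenkins workspace path is shortened to the repo-relative scripts/rpc.py; the ls line is illustrative only):

# Run from the spdk repo root. While /tmp/tmp.T8tnZHv9yR is readable by
# group/other, the attach fails exactly as logged above.
ls -l /tmp/tmp.T8tnZHv9yR
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.T8tnZHv9yR            # -> "Operation not permitted"
chmod 0600 /tmp/tmp.T8tnZHv9yR           # owner-only mode satisfies the check

The same restriction applies on the target side: the nvmf_subsystem_add_host call below fails with -32603 "Internal error" until the chmod at target/tls.sh@181 runs.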
00:18:19.897 [2024-07-24 23:55:50.369000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T8tnZHv9yR 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:20.828 [2024-07-24 23:55:51.372536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.828 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:21.086 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:21.343 [2024-07-24 23:55:51.853811] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:21.343 [2024-07-24 23:55:51.854048] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.343 23:55:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:21.600 malloc0 00:18:21.600 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:21.857 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:22.116 [2024-07-24 23:55:52.603483] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:22.116 [2024-07-24 23:55:52.603531] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:22.116 [2024-07-24 23:55:52.603577] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:22.116 request: 00:18:22.116 { 00:18:22.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.116 "host": "nqn.2016-06.io.spdk:host1", 00:18:22.116 "psk": "/tmp/tmp.T8tnZHv9yR", 00:18:22.116 "method": "nvmf_subsystem_add_host", 00:18:22.116 "req_id": 1 00:18:22.116 } 00:18:22.116 Got JSON-RPC error response 00:18:22.116 response: 00:18:22.116 { 00:18:22.116 "code": -32603, 00:18:22.116 "message": "Internal error" 00:18:22.116 } 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 3399832 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3399832 ']' 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3399832 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3399832 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3399832' 00:18:22.116 killing process with pid 3399832 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3399832 00:18:22.116 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3399832 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.T8tnZHv9yR 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3400140 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # 
waitforlisten 3400140 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3400140 ']' 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:22.428 23:55:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.428 [2024-07-24 23:55:52.985743] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:22.428 [2024-07-24 23:55:52.985830] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.690 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.690 [2024-07-24 23:55:53.049487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.690 [2024-07-24 23:55:53.155902] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.690 [2024-07-24 23:55:53.155951] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.690 [2024-07-24 23:55:53.155981] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.690 [2024-07-24 23:55:53.155993] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.690 [2024-07-24 23:55:53.156003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
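Note: setup_nvmf_tgt, replayed below against the fresh target (pid 3400140) now that the key is mode 0600, is this six-RPC sequence, condensed from the trace (full Jenkins workspace path shortened to scripts/rpc.py):

# Run from the spdk repo root against the target's default RPC socket.
scripts/rpc.py nvmf_create_transport -t tcp -o
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k        # -k requests a TLS listener (experimental)
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR

With the permissions fixed, add_host now only logs the PSK-path deprecation warning instead of the earlier internal error.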
00:18:22.690 [2024-07-24 23:55:53.156037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.690 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:22.690 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:22.690 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.690 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:22.690 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.955 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.955 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:22.955 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T8tnZHv9yR 00:18:22.955 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:22.955 [2024-07-24 23:55:53.538686] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.955 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:23.212 23:55:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:23.469 [2024-07-24 23:55:54.011943] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:23.469 [2024-07-24 23:55:54.012170] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:23.469 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:23.727 malloc0 00:18:23.727 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:23.985 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:24.243 [2024-07-24 23:55:54.753738] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=3400420 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 3400420 /var/tmp/bdevperf.sock 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- 
# '[' -z 3400420 ']' 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:24.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:24.243 23:55:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.243 [2024-07-24 23:55:54.809710] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:24.243 [2024-07-24 23:55:54.809781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400420 ] 00:18:24.243 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.501 [2024-07-24 23:55:54.868165] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.501 [2024-07-24 23:55:54.980273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.501 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.501 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:24.501 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:24.760 [2024-07-24 23:55:55.321650] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:24.760 [2024-07-24 23:55:55.321802] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:25.017 TLSTESTn1 00:18:25.017 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:25.275 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:25.275 "subsystems": [ 00:18:25.275 { 00:18:25.275 "subsystem": "keyring", 00:18:25.275 "config": [] 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "subsystem": "iobuf", 00:18:25.275 "config": [ 00:18:25.275 { 00:18:25.275 "method": "iobuf_set_options", 00:18:25.275 "params": { 00:18:25.275 "small_pool_count": 8192, 00:18:25.275 "large_pool_count": 1024, 00:18:25.275 "small_bufsize": 8192, 00:18:25.275 "large_bufsize": 135168 00:18:25.275 } 00:18:25.275 } 00:18:25.275 ] 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "subsystem": "sock", 00:18:25.275 "config": [ 00:18:25.275 { 00:18:25.275 "method": "sock_set_default_impl", 00:18:25.275 "params": { 00:18:25.275 "impl_name": "posix" 00:18:25.275 } 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "method": "sock_impl_set_options", 00:18:25.275 "params": { 00:18:25.275 "impl_name": "ssl", 00:18:25.275 "recv_buf_size": 4096, 00:18:25.275 "send_buf_size": 4096, 
00:18:25.275 "enable_recv_pipe": true, 00:18:25.275 "enable_quickack": false, 00:18:25.275 "enable_placement_id": 0, 00:18:25.275 "enable_zerocopy_send_server": true, 00:18:25.275 "enable_zerocopy_send_client": false, 00:18:25.275 "zerocopy_threshold": 0, 00:18:25.275 "tls_version": 0, 00:18:25.275 "enable_ktls": false 00:18:25.275 } 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "method": "sock_impl_set_options", 00:18:25.275 "params": { 00:18:25.275 "impl_name": "posix", 00:18:25.275 "recv_buf_size": 2097152, 00:18:25.275 "send_buf_size": 2097152, 00:18:25.275 "enable_recv_pipe": true, 00:18:25.275 "enable_quickack": false, 00:18:25.275 "enable_placement_id": 0, 00:18:25.275 "enable_zerocopy_send_server": true, 00:18:25.275 "enable_zerocopy_send_client": false, 00:18:25.275 "zerocopy_threshold": 0, 00:18:25.275 "tls_version": 0, 00:18:25.275 "enable_ktls": false 00:18:25.275 } 00:18:25.275 } 00:18:25.275 ] 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "subsystem": "vmd", 00:18:25.275 "config": [] 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "subsystem": "accel", 00:18:25.275 "config": [ 00:18:25.275 { 00:18:25.275 "method": "accel_set_options", 00:18:25.275 "params": { 00:18:25.275 "small_cache_size": 128, 00:18:25.275 "large_cache_size": 16, 00:18:25.275 "task_count": 2048, 00:18:25.275 "sequence_count": 2048, 00:18:25.275 "buf_count": 2048 00:18:25.275 } 00:18:25.275 } 00:18:25.275 ] 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "subsystem": "bdev", 00:18:25.275 "config": [ 00:18:25.275 { 00:18:25.275 "method": "bdev_set_options", 00:18:25.275 "params": { 00:18:25.275 "bdev_io_pool_size": 65535, 00:18:25.275 "bdev_io_cache_size": 256, 00:18:25.275 "bdev_auto_examine": true, 00:18:25.275 "iobuf_small_cache_size": 128, 00:18:25.275 "iobuf_large_cache_size": 16 00:18:25.275 } 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "method": "bdev_raid_set_options", 00:18:25.275 "params": { 00:18:25.275 "process_window_size_kb": 1024, 00:18:25.275 "process_max_bandwidth_mb_sec": 0 00:18:25.275 } 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "method": "bdev_iscsi_set_options", 00:18:25.275 "params": { 00:18:25.275 "timeout_sec": 30 00:18:25.275 } 00:18:25.275 }, 00:18:25.275 { 00:18:25.275 "method": "bdev_nvme_set_options", 00:18:25.275 "params": { 00:18:25.275 "action_on_timeout": "none", 00:18:25.275 "timeout_us": 0, 00:18:25.275 "timeout_admin_us": 0, 00:18:25.275 "keep_alive_timeout_ms": 10000, 00:18:25.275 "arbitration_burst": 0, 00:18:25.275 "low_priority_weight": 0, 00:18:25.275 "medium_priority_weight": 0, 00:18:25.275 "high_priority_weight": 0, 00:18:25.275 "nvme_adminq_poll_period_us": 10000, 00:18:25.275 "nvme_ioq_poll_period_us": 0, 00:18:25.275 "io_queue_requests": 0, 00:18:25.275 "delay_cmd_submit": true, 00:18:25.275 "transport_retry_count": 4, 00:18:25.275 "bdev_retry_count": 3, 00:18:25.275 "transport_ack_timeout": 0, 00:18:25.275 "ctrlr_loss_timeout_sec": 0, 00:18:25.275 "reconnect_delay_sec": 0, 00:18:25.275 "fast_io_fail_timeout_sec": 0, 00:18:25.275 "disable_auto_failback": false, 00:18:25.275 "generate_uuids": false, 00:18:25.275 "transport_tos": 0, 00:18:25.275 "nvme_error_stat": false, 00:18:25.275 "rdma_srq_size": 0, 00:18:25.275 "io_path_stat": false, 00:18:25.276 "allow_accel_sequence": false, 00:18:25.276 "rdma_max_cq_size": 0, 00:18:25.276 "rdma_cm_event_timeout_ms": 0, 00:18:25.276 "dhchap_digests": [ 00:18:25.276 "sha256", 00:18:25.276 "sha384", 00:18:25.276 "sha512" 00:18:25.276 ], 00:18:25.276 "dhchap_dhgroups": [ 00:18:25.276 "null", 00:18:25.276 "ffdhe2048", 00:18:25.276 
"ffdhe3072", 00:18:25.276 "ffdhe4096", 00:18:25.276 "ffdhe6144", 00:18:25.276 "ffdhe8192" 00:18:25.276 ] 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "bdev_nvme_set_hotplug", 00:18:25.276 "params": { 00:18:25.276 "period_us": 100000, 00:18:25.276 "enable": false 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "bdev_malloc_create", 00:18:25.276 "params": { 00:18:25.276 "name": "malloc0", 00:18:25.276 "num_blocks": 8192, 00:18:25.276 "block_size": 4096, 00:18:25.276 "physical_block_size": 4096, 00:18:25.276 "uuid": "5aac89f5-e5fc-4ca0-a9b8-adfed22139d7", 00:18:25.276 "optimal_io_boundary": 0, 00:18:25.276 "md_size": 0, 00:18:25.276 "dif_type": 0, 00:18:25.276 "dif_is_head_of_md": false, 00:18:25.276 "dif_pi_format": 0 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "bdev_wait_for_examine" 00:18:25.276 } 00:18:25.276 ] 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "subsystem": "nbd", 00:18:25.276 "config": [] 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "subsystem": "scheduler", 00:18:25.276 "config": [ 00:18:25.276 { 00:18:25.276 "method": "framework_set_scheduler", 00:18:25.276 "params": { 00:18:25.276 "name": "static" 00:18:25.276 } 00:18:25.276 } 00:18:25.276 ] 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "subsystem": "nvmf", 00:18:25.276 "config": [ 00:18:25.276 { 00:18:25.276 "method": "nvmf_set_config", 00:18:25.276 "params": { 00:18:25.276 "discovery_filter": "match_any", 00:18:25.276 "admin_cmd_passthru": { 00:18:25.276 "identify_ctrlr": false 00:18:25.276 } 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_set_max_subsystems", 00:18:25.276 "params": { 00:18:25.276 "max_subsystems": 1024 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_set_crdt", 00:18:25.276 "params": { 00:18:25.276 "crdt1": 0, 00:18:25.276 "crdt2": 0, 00:18:25.276 "crdt3": 0 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_create_transport", 00:18:25.276 "params": { 00:18:25.276 "trtype": "TCP", 00:18:25.276 "max_queue_depth": 128, 00:18:25.276 "max_io_qpairs_per_ctrlr": 127, 00:18:25.276 "in_capsule_data_size": 4096, 00:18:25.276 "max_io_size": 131072, 00:18:25.276 "io_unit_size": 131072, 00:18:25.276 "max_aq_depth": 128, 00:18:25.276 "num_shared_buffers": 511, 00:18:25.276 "buf_cache_size": 4294967295, 00:18:25.276 "dif_insert_or_strip": false, 00:18:25.276 "zcopy": false, 00:18:25.276 "c2h_success": false, 00:18:25.276 "sock_priority": 0, 00:18:25.276 "abort_timeout_sec": 1, 00:18:25.276 "ack_timeout": 0, 00:18:25.276 "data_wr_pool_size": 0 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_create_subsystem", 00:18:25.276 "params": { 00:18:25.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.276 "allow_any_host": false, 00:18:25.276 "serial_number": "SPDK00000000000001", 00:18:25.276 "model_number": "SPDK bdev Controller", 00:18:25.276 "max_namespaces": 10, 00:18:25.276 "min_cntlid": 1, 00:18:25.276 "max_cntlid": 65519, 00:18:25.276 "ana_reporting": false 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_subsystem_add_host", 00:18:25.276 "params": { 00:18:25.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.276 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.276 "psk": "/tmp/tmp.T8tnZHv9yR" 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_subsystem_add_ns", 00:18:25.276 "params": { 00:18:25.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.276 "namespace": { 00:18:25.276 "nsid": 1, 00:18:25.276 
"bdev_name": "malloc0", 00:18:25.276 "nguid": "5AAC89F5E5FC4CA0A9B8ADFED22139D7", 00:18:25.276 "uuid": "5aac89f5-e5fc-4ca0-a9b8-adfed22139d7", 00:18:25.276 "no_auto_visible": false 00:18:25.276 } 00:18:25.276 } 00:18:25.276 }, 00:18:25.276 { 00:18:25.276 "method": "nvmf_subsystem_add_listener", 00:18:25.276 "params": { 00:18:25.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.276 "listen_address": { 00:18:25.276 "trtype": "TCP", 00:18:25.276 "adrfam": "IPv4", 00:18:25.276 "traddr": "10.0.0.2", 00:18:25.276 "trsvcid": "4420" 00:18:25.276 }, 00:18:25.276 "secure_channel": true 00:18:25.276 } 00:18:25.276 } 00:18:25.276 ] 00:18:25.276 } 00:18:25.276 ] 00:18:25.276 }' 00:18:25.276 23:55:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:25.534 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:25.534 "subsystems": [ 00:18:25.534 { 00:18:25.534 "subsystem": "keyring", 00:18:25.534 "config": [] 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "subsystem": "iobuf", 00:18:25.534 "config": [ 00:18:25.534 { 00:18:25.534 "method": "iobuf_set_options", 00:18:25.534 "params": { 00:18:25.534 "small_pool_count": 8192, 00:18:25.534 "large_pool_count": 1024, 00:18:25.534 "small_bufsize": 8192, 00:18:25.534 "large_bufsize": 135168 00:18:25.534 } 00:18:25.534 } 00:18:25.534 ] 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "subsystem": "sock", 00:18:25.534 "config": [ 00:18:25.534 { 00:18:25.534 "method": "sock_set_default_impl", 00:18:25.534 "params": { 00:18:25.534 "impl_name": "posix" 00:18:25.534 } 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "method": "sock_impl_set_options", 00:18:25.534 "params": { 00:18:25.534 "impl_name": "ssl", 00:18:25.534 "recv_buf_size": 4096, 00:18:25.534 "send_buf_size": 4096, 00:18:25.534 "enable_recv_pipe": true, 00:18:25.534 "enable_quickack": false, 00:18:25.534 "enable_placement_id": 0, 00:18:25.534 "enable_zerocopy_send_server": true, 00:18:25.534 "enable_zerocopy_send_client": false, 00:18:25.534 "zerocopy_threshold": 0, 00:18:25.534 "tls_version": 0, 00:18:25.534 "enable_ktls": false 00:18:25.534 } 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "method": "sock_impl_set_options", 00:18:25.534 "params": { 00:18:25.534 "impl_name": "posix", 00:18:25.534 "recv_buf_size": 2097152, 00:18:25.534 "send_buf_size": 2097152, 00:18:25.534 "enable_recv_pipe": true, 00:18:25.534 "enable_quickack": false, 00:18:25.534 "enable_placement_id": 0, 00:18:25.534 "enable_zerocopy_send_server": true, 00:18:25.534 "enable_zerocopy_send_client": false, 00:18:25.534 "zerocopy_threshold": 0, 00:18:25.534 "tls_version": 0, 00:18:25.534 "enable_ktls": false 00:18:25.534 } 00:18:25.534 } 00:18:25.534 ] 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "subsystem": "vmd", 00:18:25.534 "config": [] 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "subsystem": "accel", 00:18:25.534 "config": [ 00:18:25.534 { 00:18:25.534 "method": "accel_set_options", 00:18:25.534 "params": { 00:18:25.534 "small_cache_size": 128, 00:18:25.534 "large_cache_size": 16, 00:18:25.534 "task_count": 2048, 00:18:25.534 "sequence_count": 2048, 00:18:25.534 "buf_count": 2048 00:18:25.534 } 00:18:25.534 } 00:18:25.534 ] 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "subsystem": "bdev", 00:18:25.534 "config": [ 00:18:25.534 { 00:18:25.534 "method": "bdev_set_options", 00:18:25.534 "params": { 00:18:25.534 "bdev_io_pool_size": 65535, 00:18:25.534 "bdev_io_cache_size": 256, 00:18:25.534 
"bdev_auto_examine": true, 00:18:25.534 "iobuf_small_cache_size": 128, 00:18:25.534 "iobuf_large_cache_size": 16 00:18:25.534 } 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "method": "bdev_raid_set_options", 00:18:25.534 "params": { 00:18:25.534 "process_window_size_kb": 1024, 00:18:25.534 "process_max_bandwidth_mb_sec": 0 00:18:25.534 } 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "method": "bdev_iscsi_set_options", 00:18:25.534 "params": { 00:18:25.534 "timeout_sec": 30 00:18:25.534 } 00:18:25.534 }, 00:18:25.534 { 00:18:25.534 "method": "bdev_nvme_set_options", 00:18:25.534 "params": { 00:18:25.534 "action_on_timeout": "none", 00:18:25.534 "timeout_us": 0, 00:18:25.534 "timeout_admin_us": 0, 00:18:25.534 "keep_alive_timeout_ms": 10000, 00:18:25.534 "arbitration_burst": 0, 00:18:25.534 "low_priority_weight": 0, 00:18:25.534 "medium_priority_weight": 0, 00:18:25.534 "high_priority_weight": 0, 00:18:25.534 "nvme_adminq_poll_period_us": 10000, 00:18:25.534 "nvme_ioq_poll_period_us": 0, 00:18:25.534 "io_queue_requests": 512, 00:18:25.534 "delay_cmd_submit": true, 00:18:25.534 "transport_retry_count": 4, 00:18:25.534 "bdev_retry_count": 3, 00:18:25.534 "transport_ack_timeout": 0, 00:18:25.534 "ctrlr_loss_timeout_sec": 0, 00:18:25.534 "reconnect_delay_sec": 0, 00:18:25.534 "fast_io_fail_timeout_sec": 0, 00:18:25.534 "disable_auto_failback": false, 00:18:25.534 "generate_uuids": false, 00:18:25.534 "transport_tos": 0, 00:18:25.534 "nvme_error_stat": false, 00:18:25.534 "rdma_srq_size": 0, 00:18:25.534 "io_path_stat": false, 00:18:25.534 "allow_accel_sequence": false, 00:18:25.534 "rdma_max_cq_size": 0, 00:18:25.534 "rdma_cm_event_timeout_ms": 0, 00:18:25.534 "dhchap_digests": [ 00:18:25.534 "sha256", 00:18:25.534 "sha384", 00:18:25.534 "sha512" 00:18:25.534 ], 00:18:25.534 "dhchap_dhgroups": [ 00:18:25.534 "null", 00:18:25.534 "ffdhe2048", 00:18:25.535 "ffdhe3072", 00:18:25.535 "ffdhe4096", 00:18:25.535 "ffdhe6144", 00:18:25.535 "ffdhe8192" 00:18:25.535 ] 00:18:25.535 } 00:18:25.535 }, 00:18:25.535 { 00:18:25.535 "method": "bdev_nvme_attach_controller", 00:18:25.535 "params": { 00:18:25.535 "name": "TLSTEST", 00:18:25.535 "trtype": "TCP", 00:18:25.535 "adrfam": "IPv4", 00:18:25.535 "traddr": "10.0.0.2", 00:18:25.535 "trsvcid": "4420", 00:18:25.535 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.535 "prchk_reftag": false, 00:18:25.535 "prchk_guard": false, 00:18:25.535 "ctrlr_loss_timeout_sec": 0, 00:18:25.535 "reconnect_delay_sec": 0, 00:18:25.535 "fast_io_fail_timeout_sec": 0, 00:18:25.535 "psk": "/tmp/tmp.T8tnZHv9yR", 00:18:25.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.535 "hdgst": false, 00:18:25.535 "ddgst": false 00:18:25.535 } 00:18:25.535 }, 00:18:25.535 { 00:18:25.535 "method": "bdev_nvme_set_hotplug", 00:18:25.535 "params": { 00:18:25.535 "period_us": 100000, 00:18:25.535 "enable": false 00:18:25.535 } 00:18:25.535 }, 00:18:25.535 { 00:18:25.535 "method": "bdev_wait_for_examine" 00:18:25.535 } 00:18:25.535 ] 00:18:25.535 }, 00:18:25.535 { 00:18:25.535 "subsystem": "nbd", 00:18:25.535 "config": [] 00:18:25.535 } 00:18:25.535 ] 00:18:25.535 }' 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 3400420 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3400420 ']' 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3400420 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 
00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400420 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400420' 00:18:25.535 killing process with pid 3400420 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3400420 00:18:25.535 Received shutdown signal, test time was about 10.000000 seconds 00:18:25.535 00:18:25.535 Latency(us) 00:18:25.535 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.535 =================================================================================================================== 00:18:25.535 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:25.535 [2024-07-24 23:55:56.115922] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:25.535 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3400420 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 3400140 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3400140 ']' 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3400140 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400140 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:25.792 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:25.793 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400140' 00:18:25.793 killing process with pid 3400140 00:18:25.793 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3400140 00:18:25.793 [2024-07-24 23:55:56.387078] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:25.793 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3400140 00:18:26.359 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:26.359 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:26.359 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:26.359 "subsystems": [ 00:18:26.359 { 00:18:26.359 "subsystem": "keyring", 00:18:26.359 "config": [] 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "subsystem": "iobuf", 00:18:26.359 "config": [ 00:18:26.359 { 00:18:26.359 "method": "iobuf_set_options", 
00:18:26.359 "params": { 00:18:26.359 "small_pool_count": 8192, 00:18:26.359 "large_pool_count": 1024, 00:18:26.359 "small_bufsize": 8192, 00:18:26.359 "large_bufsize": 135168 00:18:26.359 } 00:18:26.359 } 00:18:26.359 ] 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "subsystem": "sock", 00:18:26.359 "config": [ 00:18:26.359 { 00:18:26.359 "method": "sock_set_default_impl", 00:18:26.359 "params": { 00:18:26.359 "impl_name": "posix" 00:18:26.359 } 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "method": "sock_impl_set_options", 00:18:26.359 "params": { 00:18:26.359 "impl_name": "ssl", 00:18:26.359 "recv_buf_size": 4096, 00:18:26.359 "send_buf_size": 4096, 00:18:26.359 "enable_recv_pipe": true, 00:18:26.359 "enable_quickack": false, 00:18:26.359 "enable_placement_id": 0, 00:18:26.359 "enable_zerocopy_send_server": true, 00:18:26.359 "enable_zerocopy_send_client": false, 00:18:26.359 "zerocopy_threshold": 0, 00:18:26.359 "tls_version": 0, 00:18:26.359 "enable_ktls": false 00:18:26.359 } 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "method": "sock_impl_set_options", 00:18:26.359 "params": { 00:18:26.359 "impl_name": "posix", 00:18:26.359 "recv_buf_size": 2097152, 00:18:26.359 "send_buf_size": 2097152, 00:18:26.359 "enable_recv_pipe": true, 00:18:26.359 "enable_quickack": false, 00:18:26.359 "enable_placement_id": 0, 00:18:26.359 "enable_zerocopy_send_server": true, 00:18:26.359 "enable_zerocopy_send_client": false, 00:18:26.359 "zerocopy_threshold": 0, 00:18:26.359 "tls_version": 0, 00:18:26.359 "enable_ktls": false 00:18:26.359 } 00:18:26.359 } 00:18:26.359 ] 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "subsystem": "vmd", 00:18:26.359 "config": [] 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "subsystem": "accel", 00:18:26.359 "config": [ 00:18:26.359 { 00:18:26.359 "method": "accel_set_options", 00:18:26.359 "params": { 00:18:26.359 "small_cache_size": 128, 00:18:26.359 "large_cache_size": 16, 00:18:26.359 "task_count": 2048, 00:18:26.359 "sequence_count": 2048, 00:18:26.359 "buf_count": 2048 00:18:26.359 } 00:18:26.359 } 00:18:26.359 ] 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "subsystem": "bdev", 00:18:26.359 "config": [ 00:18:26.359 { 00:18:26.359 "method": "bdev_set_options", 00:18:26.359 "params": { 00:18:26.359 "bdev_io_pool_size": 65535, 00:18:26.359 "bdev_io_cache_size": 256, 00:18:26.359 "bdev_auto_examine": true, 00:18:26.359 "iobuf_small_cache_size": 128, 00:18:26.359 "iobuf_large_cache_size": 16 00:18:26.359 } 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "method": "bdev_raid_set_options", 00:18:26.359 "params": { 00:18:26.359 "process_window_size_kb": 1024, 00:18:26.359 "process_max_bandwidth_mb_sec": 0 00:18:26.359 } 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "method": "bdev_iscsi_set_options", 00:18:26.359 "params": { 00:18:26.359 "timeout_sec": 30 00:18:26.359 } 00:18:26.359 }, 00:18:26.359 { 00:18:26.359 "method": "bdev_nvme_set_options", 00:18:26.359 "params": { 00:18:26.359 "action_on_timeout": "none", 00:18:26.359 "timeout_us": 0, 00:18:26.359 "timeout_admin_us": 0, 00:18:26.359 "keep_alive_timeout_ms": 10000, 00:18:26.359 "arbitration_burst": 0, 00:18:26.359 "low_priority_weight": 0, 00:18:26.359 "medium_priority_weight": 0, 00:18:26.359 "high_priority_weight": 0, 00:18:26.359 "nvme_adminq_poll_period_us": 10000, 00:18:26.359 "nvme_ioq_poll_period_us": 0, 00:18:26.359 "io_queue_requests": 0, 00:18:26.359 "delay_cmd_submit": true, 00:18:26.359 "transport_retry_count": 4, 00:18:26.360 "bdev_retry_count": 3, 00:18:26.360 "transport_ack_timeout": 0, 00:18:26.360 
"ctrlr_loss_timeout_sec": 0, 00:18:26.360 "reconnect_delay_sec": 0, 00:18:26.360 "fast_io_fail_timeout_sec": 0, 00:18:26.360 "disable_auto_failback": false, 00:18:26.360 "generate_uuids": false, 00:18:26.360 "transport_tos": 0, 00:18:26.360 "nvme_error_stat": false, 00:18:26.360 "rdma_srq_size": 0, 00:18:26.360 "io_path_stat": false, 00:18:26.360 "allow_accel_sequence": false, 00:18:26.360 "rdma_max_cq_size": 0, 00:18:26.360 "rdma_cm_event_timeout_ms": 0, 00:18:26.360 "dhchap_digests": [ 00:18:26.360 "sha256", 00:18:26.360 "sha384", 00:18:26.360 "sha512" 00:18:26.360 ], 00:18:26.360 "dhchap_dhgroups": [ 00:18:26.360 "null", 00:18:26.360 "ffdhe2048", 00:18:26.360 "ffdhe3072", 00:18:26.360 "ffdhe4096", 00:18:26.360 "ffdhe6144", 00:18:26.360 "ffdhe8192" 00:18:26.360 ] 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "bdev_nvme_set_hotplug", 00:18:26.360 "params": { 00:18:26.360 "period_us": 100000, 00:18:26.360 "enable": false 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "bdev_malloc_create", 00:18:26.360 "params": { 00:18:26.360 "name": "malloc0", 00:18:26.360 "num_blocks": 8192, 00:18:26.360 "block_size": 4096, 00:18:26.360 "physical_block_size": 4096, 00:18:26.360 "uuid": "5aac89f5-e5fc-4ca0-a9b8-adfed22139d7", 00:18:26.360 "optimal_io_boundary": 0, 00:18:26.360 "md_size": 0, 00:18:26.360 "dif_type": 0, 00:18:26.360 "dif_is_head_of_md": false, 00:18:26.360 "dif_pi_format": 0 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "bdev_wait_for_examine" 00:18:26.360 } 00:18:26.360 ] 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "subsystem": "nbd", 00:18:26.360 "config": [] 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "subsystem": "scheduler", 00:18:26.360 "config": [ 00:18:26.360 { 00:18:26.360 "method": "framework_set_scheduler", 00:18:26.360 "params": { 00:18:26.360 "name": "static" 00:18:26.360 } 00:18:26.360 } 00:18:26.360 ] 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "subsystem": "nvmf", 00:18:26.360 "config": [ 00:18:26.360 { 00:18:26.360 "method": "nvmf_set_config", 00:18:26.360 "params": { 00:18:26.360 "discovery_filter": "match_any", 00:18:26.360 "admin_cmd_passthru": { 00:18:26.360 "identify_ctrlr": false 00:18:26.360 } 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_set_max_subsystems", 00:18:26.360 "params": { 00:18:26.360 "max_subsystems": 1024 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_set_crdt", 00:18:26.360 "params": { 00:18:26.360 "crdt1": 0, 00:18:26.360 "crdt2": 0, 00:18:26.360 "crdt3": 0 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_create_transport", 00:18:26.360 "params": { 00:18:26.360 "trtype": "TCP", 00:18:26.360 "max_queue_depth": 128, 00:18:26.360 "max_io_qpairs_per_ctrlr": 127, 00:18:26.360 "in_capsule_data_size": 4096, 00:18:26.360 "max_io_size": 131072, 00:18:26.360 "io_unit_size": 131072, 00:18:26.360 "max_aq_depth": 128, 00:18:26.360 "num_shared_buffers": 511, 00:18:26.360 "buf_cache_size": 4294967295, 00:18:26.360 "dif_insert_or_strip": false, 00:18:26.360 "zcopy": false, 00:18:26.360 "c2h_success": false, 00:18:26.360 "sock_priority": 0, 00:18:26.360 "abort_timeout_sec": 1, 00:18:26.360 "ack_timeout": 0, 00:18:26.360 "data_wr_pool_size": 0 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_create_subsystem", 00:18:26.360 "params": { 00:18:26.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.360 "allow_any_host": false, 00:18:26.360 "serial_number": "SPDK00000000000001", 00:18:26.360 
"model_number": "SPDK bdev Controller", 00:18:26.360 "max_namespaces": 10, 00:18:26.360 "min_cntlid": 1, 00:18:26.360 "max_cntlid": 65519, 00:18:26.360 "ana_reporting": false 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_subsystem_add_host", 00:18:26.360 "params": { 00:18:26.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.360 "host": "nqn.2016-06.io.spdk:host1", 00:18:26.360 "psk": "/tmp/tmp.T8tnZHv9yR" 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_subsystem_add_ns", 00:18:26.360 "params": { 00:18:26.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.360 "namespace": { 00:18:26.360 "nsid": 1, 00:18:26.360 "bdev_name": "malloc0", 00:18:26.360 "nguid": "5AAC89F5E5FC4CA0A9B8ADFED22139D7", 00:18:26.360 "uuid": "5aac89f5-e5fc-4ca0-a9b8-adfed22139d7", 00:18:26.360 "no_auto_visible": false 00:18:26.360 } 00:18:26.360 } 00:18:26.360 }, 00:18:26.360 { 00:18:26.360 "method": "nvmf_subsystem_add_listener", 00:18:26.360 "params": { 00:18:26.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:26.360 "listen_address": { 00:18:26.360 "trtype": "TCP", 00:18:26.360 "adrfam": "IPv4", 00:18:26.360 "traddr": "10.0.0.2", 00:18:26.360 "trsvcid": "4420" 00:18:26.360 }, 00:18:26.360 "secure_channel": true 00:18:26.360 } 00:18:26.360 } 00:18:26.360 ] 00:18:26.360 } 00:18:26.360 ] 00:18:26.360 }' 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3400584 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3400584 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3400584 ']' 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:26.360 23:55:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:26.360 [2024-07-24 23:55:56.724457] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:26.360 [2024-07-24 23:55:56.724548] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.360 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.360 [2024-07-24 23:55:56.792372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.360 [2024-07-24 23:55:56.907749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:26.360 [2024-07-24 23:55:56.907810] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.360 [2024-07-24 23:55:56.907826] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.360 [2024-07-24 23:55:56.907839] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.360 [2024-07-24 23:55:56.907851] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:26.360 [2024-07-24 23:55:56.907935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:26.618 [2024-07-24 23:55:57.148094] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.618 [2024-07-24 23:55:57.169954] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:26.618 [2024-07-24 23:55:57.186011] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.618 [2024-07-24 23:55:57.186295] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=3400735 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 3400735 /var/tmp/bdevperf.sock 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3400735 ']' 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:27.185 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:27.185 "subsystems": [ 00:18:27.185 { 00:18:27.185 "subsystem": "keyring", 00:18:27.185 "config": [] 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "subsystem": "iobuf", 00:18:27.185 "config": [ 00:18:27.185 { 00:18:27.185 "method": "iobuf_set_options", 00:18:27.185 "params": { 00:18:27.185 "small_pool_count": 8192, 00:18:27.185 "large_pool_count": 1024, 00:18:27.185 "small_bufsize": 8192, 00:18:27.185 "large_bufsize": 135168 00:18:27.185 } 00:18:27.185 } 00:18:27.185 ] 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "subsystem": "sock", 00:18:27.185 "config": [ 00:18:27.185 { 00:18:27.185 "method": "sock_set_default_impl", 00:18:27.185 "params": { 00:18:27.185 "impl_name": "posix" 00:18:27.185 } 00:18:27.185 }, 
00:18:27.185 { 00:18:27.185 "method": "sock_impl_set_options", 00:18:27.185 "params": { 00:18:27.185 "impl_name": "ssl", 00:18:27.185 "recv_buf_size": 4096, 00:18:27.185 "send_buf_size": 4096, 00:18:27.185 "enable_recv_pipe": true, 00:18:27.185 "enable_quickack": false, 00:18:27.185 "enable_placement_id": 0, 00:18:27.185 "enable_zerocopy_send_server": true, 00:18:27.185 "enable_zerocopy_send_client": false, 00:18:27.185 "zerocopy_threshold": 0, 00:18:27.185 "tls_version": 0, 00:18:27.185 "enable_ktls": false 00:18:27.185 } 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "method": "sock_impl_set_options", 00:18:27.185 "params": { 00:18:27.185 "impl_name": "posix", 00:18:27.185 "recv_buf_size": 2097152, 00:18:27.185 "send_buf_size": 2097152, 00:18:27.185 "enable_recv_pipe": true, 00:18:27.185 "enable_quickack": false, 00:18:27.185 "enable_placement_id": 0, 00:18:27.185 "enable_zerocopy_send_server": true, 00:18:27.185 "enable_zerocopy_send_client": false, 00:18:27.185 "zerocopy_threshold": 0, 00:18:27.185 "tls_version": 0, 00:18:27.185 "enable_ktls": false 00:18:27.185 } 00:18:27.185 } 00:18:27.185 ] 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "subsystem": "vmd", 00:18:27.185 "config": [] 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "subsystem": "accel", 00:18:27.185 "config": [ 00:18:27.185 { 00:18:27.185 "method": "accel_set_options", 00:18:27.185 "params": { 00:18:27.185 "small_cache_size": 128, 00:18:27.185 "large_cache_size": 16, 00:18:27.185 "task_count": 2048, 00:18:27.185 "sequence_count": 2048, 00:18:27.185 "buf_count": 2048 00:18:27.185 } 00:18:27.185 } 00:18:27.185 ] 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "subsystem": "bdev", 00:18:27.185 "config": [ 00:18:27.185 { 00:18:27.185 "method": "bdev_set_options", 00:18:27.185 "params": { 00:18:27.185 "bdev_io_pool_size": 65535, 00:18:27.185 "bdev_io_cache_size": 256, 00:18:27.185 "bdev_auto_examine": true, 00:18:27.185 "iobuf_small_cache_size": 128, 00:18:27.185 "iobuf_large_cache_size": 16 00:18:27.185 } 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "method": "bdev_raid_set_options", 00:18:27.185 "params": { 00:18:27.185 "process_window_size_kb": 1024, 00:18:27.185 "process_max_bandwidth_mb_sec": 0 00:18:27.185 } 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "method": "bdev_iscsi_set_options", 00:18:27.185 "params": { 00:18:27.185 "timeout_sec": 30 00:18:27.185 } 00:18:27.185 }, 00:18:27.185 { 00:18:27.185 "method": "bdev_nvme_set_options", 00:18:27.185 "params": { 00:18:27.185 "action_on_timeout": "none", 00:18:27.185 "timeout_us": 0, 00:18:27.185 "timeout_admin_us": 0, 00:18:27.185 "keep_alive_timeout_ms": 10000, 00:18:27.185 "arbitration_burst": 0, 00:18:27.185 "low_priority_weight": 0, 00:18:27.185 "medium_priority_weight": 0, 00:18:27.185 "high_priority_weight": 0, 00:18:27.185 "nvme_adminq_poll_period_us": 10000, 00:18:27.185 "nvme_ioq_poll_period_us": 0, 00:18:27.185 "io_queue_requests": 512, 00:18:27.185 "delay_cmd_submit": true, 00:18:27.185 "transport_retry_count": 4, 00:18:27.185 "bdev_retry_count": 3, 00:18:27.185 "transport_ack_timeout": 0, 00:18:27.185 "ctrlr_loss_timeout_sec": 0, 00:18:27.185 "reconnect_delay_sec": 0, 00:18:27.185 "fast_io_fail_timeout_sec": 0, 00:18:27.185 "disable_auto_failback": false, 00:18:27.185 "generate_uuids": false, 00:18:27.185 "transport_tos": 0, 00:18:27.185 "nvme_error_stat": false, 00:18:27.185 "rdma_srq_size": 0, 00:18:27.185 "io_path_stat": false, 00:18:27.185 "allow_accel_sequence": false, 00:18:27.185 "rdma_max_cq_size": 0, 00:18:27.186 "rdma_cm_event_timeout_ms": 0, 00:18:27.186 
"dhchap_digests": [ 00:18:27.186 "sha256", 00:18:27.186 "sha384", 00:18:27.186 "sha512" 00:18:27.186 ], 00:18:27.186 "dhchap_dhgroups": [ 00:18:27.186 "null", 00:18:27.186 "ffdhe2048", 00:18:27.186 "ffdhe3072", 00:18:27.186 "ffdhe4096", 00:18:27.186 "ffdhe6144", 00:18:27.186 "ffdhe8192" 00:18:27.186 ] 00:18:27.186 } 00:18:27.186 }, 00:18:27.186 { 00:18:27.186 "method": "bdev_nvme_attach_controller", 00:18:27.186 "params": { 00:18:27.186 "name": "TLSTEST", 00:18:27.186 "trtype": "TCP", 00:18:27.186 "adrfam": "IPv4", 00:18:27.186 "traddr": "10.0.0.2", 00:18:27.186 "trsvcid": "4420", 00:18:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.186 "prchk_reftag": false, 00:18:27.186 "prchk_guard": false, 00:18:27.186 "ctrlr_loss_timeout_sec": 0, 00:18:27.186 "reconnect_delay_sec": 0, 00:18:27.186 "fast_io_fail_timeout_sec": 0, 00:18:27.186 "psk": "/tmp/tmp.T8tnZHv9yR", 00:18:27.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.186 "hdgst": false, 00:18:27.186 "ddgst": false 00:18:27.186 } 00:18:27.186 }, 00:18:27.186 { 00:18:27.186 "method": "bdev_nvme_set_hotplug", 00:18:27.186 "params": { 00:18:27.186 "period_us": 100000, 00:18:27.186 "enable": false 00:18:27.186 } 00:18:27.186 }, 00:18:27.186 { 00:18:27.186 "method": "bdev_wait_for_examine" 00:18:27.186 } 00:18:27.186 ] 00:18:27.186 }, 00:18:27.186 { 00:18:27.186 "subsystem": "nbd", 00:18:27.186 "config": [] 00:18:27.186 } 00:18:27.186 ] 00:18:27.186 }' 00:18:27.186 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.186 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:27.186 23:55:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.186 [2024-07-24 23:55:57.738392] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:27.186 [2024-07-24 23:55:57.738465] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3400735 ] 00:18:27.186 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.186 [2024-07-24 23:55:57.795327] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.444 [2024-07-24 23:55:57.914703] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.702 [2024-07-24 23:55:58.075429] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:27.702 [2024-07-24 23:55:58.075568] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:28.268 23:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.268 23:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:28.268 23:55:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:28.268 Running I/O for 10 seconds... 
00:18:40.457 00:18:40.457 Latency(us) 00:18:40.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.457 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:40.457 Verification LBA range: start 0x0 length 0x2000 00:18:40.457 TLSTESTn1 : 10.03 2845.65 11.12 0.00 0.00 44894.50 5898.24 69516.71 00:18:40.457 =================================================================================================================== 00:18:40.457 Total : 2845.65 11.12 0.00 0.00 44894.50 5898.24 69516.71 00:18:40.457 0 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 3400735 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3400735 ']' 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3400735 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400735 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400735' 00:18:40.457 killing process with pid 3400735 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3400735 00:18:40.457 Received shutdown signal, test time was about 10.000000 seconds 00:18:40.457 00:18:40.457 Latency(us) 00:18:40.457 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.457 =================================================================================================================== 00:18:40.457 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:40.457 [2024-07-24 23:56:08.913865] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:40.457 23:56:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3400735 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 3400584 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3400584 ']' 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3400584 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3400584 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:40.457 23:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3400584' 00:18:40.457 killing process with pid 3400584 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3400584 00:18:40.457 [2024-07-24 23:56:09.191391] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3400584 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3402176 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3402176 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3402176 ']' 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:40.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.457 [2024-07-24 23:56:09.509773] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:40.457 [2024-07-24 23:56:09.509849] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:40.457 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.457 [2024-07-24 23:56:09.579665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:40.457 [2024-07-24 23:56:09.696499] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:40.457 [2024-07-24 23:56:09.696576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:40.457 [2024-07-24 23:56:09.696593] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:40.457 [2024-07-24 23:56:09.696607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:40.457 [2024-07-24 23:56:09.696619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
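This is the nvmfappstart/waitforlisten pattern used throughout the suite: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, and the harness refuses to issue RPCs until the application answers on its UNIX-domain socket. The readiness poll is roughly the loop below (a sketch only; the real waitforlisten in autotest_common.sh also checks that the pid is still alive and enforces the max_retries=100 limit visible in the trace, and rpc.py abbreviates the full scripts/rpc.py path):

  # block until the target responds on its default RPC socket
  while ! rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done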
00:18:40.457 [2024-07-24 23:56:09.696653] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.T8tnZHv9yR 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.T8tnZHv9yR 00:18:40.457 23:56:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:40.457 [2024-07-24 23:56:10.119011] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:40.458 23:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:40.458 23:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:40.458 [2024-07-24 23:56:10.620336] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:40.458 [2024-07-24 23:56:10.620620] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:40.458 23:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:40.458 malloc0 00:18:40.458 23:56:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:40.715 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR 00:18:40.973 [2024-07-24 23:56:11.454367] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=3402346 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 3402346 /var/tmp/bdevperf.sock 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' 
-z 3402346 ']' 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:40.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:40.973 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:40.973 [2024-07-24 23:56:11.518458] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:40.973 [2024-07-24 23:56:11.518549] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402346 ] 00:18:40.973 EAL: No free 2048 kB hugepages reported on node 1 00:18:40.973 [2024-07-24 23:56:11.582708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.231 [2024-07-24 23:56:11.700852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:41.231 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:41.231 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:41.231 23:56:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T8tnZHv9yR 00:18:41.795 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:41.795 [2024-07-24 23:56:12.378990] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:42.052 nvme0n1 00:18:42.052 23:56:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:42.052 Running I/O for 1 seconds... 
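Taken together, the entries above complete the TLS plumbing on both ends: setup_nvmf_tgt (target/tls.sh@49-58) configures the target, and the bdevperf side (target/tls.sh@227-228) registers the PSK in a keyring before attaching. Consolidated for reference, with rpc.py again abbreviating /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py; the comment on -o is an inference from the "c2h_success": false value in the dumped config:

  # target side (default RPC socket /var/tmp/spdk.sock)
  rpc.py nvmf_create_transport -t tcp -o                 # -o: disable C2H success optimization
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.T8tnZHv9yR

  # initiator side (RPCs to the idle bdevperf): register the PSK under a key
  # name, then attach by key name instead of by file path
  rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T8tnZHv9yR
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1

The two deprecation warnings in the trace mark the boundary between the old and new schemes: passing a PSK as a bare path (spdk_nvme_ctrlr_opts.psk on the initiator, 'PSK path' on the target) is scheduled for removal in v24.09, and the keyring flow shown here is the replacement.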
00:18:43.422
00:18:43.422                                                            Latency(us)
00:18:43.422 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:43.422 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:18:43.422      Verification LBA range: start 0x0 length 0x2000
00:18:43.422      nvme0n1                 :       1.03    3399.29      13.28       0.00      0.00   37050.20    6165.24   51652.08
00:18:43.422 ===================================================================================================================
00:18:43.422 Total                       :               3399.29      13.28       0.00      0.00   37050.20    6165.24   51652.08
00:18:43.422 0
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 3402346
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3402346 ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3402346
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402346
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402346'
00:18:43.422 killing process with pid 3402346
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3402346
00:18:43.422 Received shutdown signal, test time was about 1.000000 seconds
00:18:43.422
00:18:43.422                                                            Latency(us)
00:18:43.422 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:18:43.422 ===================================================================================================================
00:18:43.422 Total                       :                  0.00       0.00       0.00      0.00       0.00       0.00       0.00
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3402346
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 3402176
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3402176 ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3402176
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402176
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402176'
00:18:43.422 killing process with pid 3402176
00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3402176
00:18:43.422 [2024-07-24 23:56:13.956980] app.c:1024:log_deprecation_hits:
*WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:43.422 23:56:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3402176 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3402747 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3402747 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3402747 ']' 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.680 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.937 [2024-07-24 23:56:14.298700] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:43.937 [2024-07-24 23:56:14.298771] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.937 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.937 [2024-07-24 23:56:14.361739] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.937 [2024-07-24 23:56:14.475302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.937 [2024-07-24 23:56:14.475358] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.937 [2024-07-24 23:56:14.475388] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.937 [2024-07-24 23:56:14.475409] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.937 [2024-07-24 23:56:14.475419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
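Before the next target comes up, a quick cross-check on the Latency(us) tables above: the MiB/s column is simply IOPS multiplied by the 4096-byte I/O size, so each row can be verified with shell arithmetic (an editorial sanity check, not a command run by the suite):

  # 3399.29 IOPS x 4096 B per I/O = 13.28 MiB/s, matching the nvme0n1 row
  awk 'BEGIN { printf "%.2f MiB/s\n", 3399.29 * 4096 / (1024 * 1024) }'

The same relation holds for the earlier TLSTESTn1 run: 2845.65 IOPS x 4096 B works out to the reported 11.12 MiB/s.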
00:18:43.937 [2024-07-24 23:56:14.475449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.194 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.194 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:44.194 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.194 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.194 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.195 [2024-07-24 23:56:14.627270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:44.195 malloc0 00:18:44.195 [2024-07-24 23:56:14.659293] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:44.195 [2024-07-24 23:56:14.668413] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=3402771 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 3402771 /var/tmp/bdevperf.sock 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3402771 ']' 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:44.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:44.195 23:56:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.195 [2024-07-24 23:56:14.737733] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:18:44.195 [2024-07-24 23:56:14.737806] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3402771 ] 00:18:44.195 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.195 [2024-07-24 23:56:14.799626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.453 [2024-07-24 23:56:14.917783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.453 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.453 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:44.453 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.T8tnZHv9yR 00:18:44.710 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:44.969 [2024-07-24 23:56:15.496996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:44.969 nvme0n1 00:18:45.226 23:56:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.226 Running I/O for 1 seconds... 00:18:46.178 00:18:46.178 Latency(us) 00:18:46.178 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.178 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:46.178 Verification LBA range: start 0x0 length 0x2000 00:18:46.178 nvme0n1 : 1.05 2663.52 10.40 0.00 0.00 47052.88 6650.69 52040.44 00:18:46.178 =================================================================================================================== 00:18:46.178 Total : 2663.52 10.40 0.00 0.00 47052.88 6650.69 52040.44 00:18:46.178 0 00:18:46.178 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:46.178 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.178 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.435 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.435 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:46.435 "subsystems": [ 00:18:46.435 { 00:18:46.435 "subsystem": "keyring", 00:18:46.435 "config": [ 00:18:46.435 { 00:18:46.435 "method": "keyring_file_add_key", 00:18:46.435 "params": { 00:18:46.435 "name": "key0", 00:18:46.435 "path": "/tmp/tmp.T8tnZHv9yR" 00:18:46.435 } 00:18:46.435 } 00:18:46.435 ] 00:18:46.435 }, 00:18:46.435 { 00:18:46.435 "subsystem": "iobuf", 00:18:46.435 "config": [ 00:18:46.435 { 00:18:46.435 "method": "iobuf_set_options", 00:18:46.435 "params": { 00:18:46.435 "small_pool_count": 8192, 00:18:46.435 "large_pool_count": 1024, 00:18:46.435 "small_bufsize": 8192, 00:18:46.435 "large_bufsize": 135168 00:18:46.435 } 00:18:46.435 } 00:18:46.435 ] 00:18:46.435 }, 00:18:46.435 { 00:18:46.435 
"subsystem": "sock", 00:18:46.435 "config": [ 00:18:46.435 { 00:18:46.435 "method": "sock_set_default_impl", 00:18:46.435 "params": { 00:18:46.435 "impl_name": "posix" 00:18:46.435 } 00:18:46.435 }, 00:18:46.435 { 00:18:46.435 "method": "sock_impl_set_options", 00:18:46.435 "params": { 00:18:46.435 "impl_name": "ssl", 00:18:46.435 "recv_buf_size": 4096, 00:18:46.435 "send_buf_size": 4096, 00:18:46.435 "enable_recv_pipe": true, 00:18:46.435 "enable_quickack": false, 00:18:46.435 "enable_placement_id": 0, 00:18:46.435 "enable_zerocopy_send_server": true, 00:18:46.435 "enable_zerocopy_send_client": false, 00:18:46.435 "zerocopy_threshold": 0, 00:18:46.435 "tls_version": 0, 00:18:46.435 "enable_ktls": false 00:18:46.435 } 00:18:46.435 }, 00:18:46.435 { 00:18:46.435 "method": "sock_impl_set_options", 00:18:46.435 "params": { 00:18:46.435 "impl_name": "posix", 00:18:46.435 "recv_buf_size": 2097152, 00:18:46.435 "send_buf_size": 2097152, 00:18:46.435 "enable_recv_pipe": true, 00:18:46.435 "enable_quickack": false, 00:18:46.435 "enable_placement_id": 0, 00:18:46.435 "enable_zerocopy_send_server": true, 00:18:46.436 "enable_zerocopy_send_client": false, 00:18:46.436 "zerocopy_threshold": 0, 00:18:46.436 "tls_version": 0, 00:18:46.436 "enable_ktls": false 00:18:46.436 } 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "vmd", 00:18:46.436 "config": [] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "accel", 00:18:46.436 "config": [ 00:18:46.436 { 00:18:46.436 "method": "accel_set_options", 00:18:46.436 "params": { 00:18:46.436 "small_cache_size": 128, 00:18:46.436 "large_cache_size": 16, 00:18:46.436 "task_count": 2048, 00:18:46.436 "sequence_count": 2048, 00:18:46.436 "buf_count": 2048 00:18:46.436 } 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "bdev", 00:18:46.436 "config": [ 00:18:46.436 { 00:18:46.436 "method": "bdev_set_options", 00:18:46.436 "params": { 00:18:46.436 "bdev_io_pool_size": 65535, 00:18:46.436 "bdev_io_cache_size": 256, 00:18:46.436 "bdev_auto_examine": true, 00:18:46.436 "iobuf_small_cache_size": 128, 00:18:46.436 "iobuf_large_cache_size": 16 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_raid_set_options", 00:18:46.436 "params": { 00:18:46.436 "process_window_size_kb": 1024, 00:18:46.436 "process_max_bandwidth_mb_sec": 0 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_iscsi_set_options", 00:18:46.436 "params": { 00:18:46.436 "timeout_sec": 30 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_nvme_set_options", 00:18:46.436 "params": { 00:18:46.436 "action_on_timeout": "none", 00:18:46.436 "timeout_us": 0, 00:18:46.436 "timeout_admin_us": 0, 00:18:46.436 "keep_alive_timeout_ms": 10000, 00:18:46.436 "arbitration_burst": 0, 00:18:46.436 "low_priority_weight": 0, 00:18:46.436 "medium_priority_weight": 0, 00:18:46.436 "high_priority_weight": 0, 00:18:46.436 "nvme_adminq_poll_period_us": 10000, 00:18:46.436 "nvme_ioq_poll_period_us": 0, 00:18:46.436 "io_queue_requests": 0, 00:18:46.436 "delay_cmd_submit": true, 00:18:46.436 "transport_retry_count": 4, 00:18:46.436 "bdev_retry_count": 3, 00:18:46.436 "transport_ack_timeout": 0, 00:18:46.436 "ctrlr_loss_timeout_sec": 0, 00:18:46.436 "reconnect_delay_sec": 0, 00:18:46.436 "fast_io_fail_timeout_sec": 0, 00:18:46.436 "disable_auto_failback": false, 00:18:46.436 "generate_uuids": false, 00:18:46.436 "transport_tos": 0, 00:18:46.436 "nvme_error_stat": false, 00:18:46.436 
"rdma_srq_size": 0, 00:18:46.436 "io_path_stat": false, 00:18:46.436 "allow_accel_sequence": false, 00:18:46.436 "rdma_max_cq_size": 0, 00:18:46.436 "rdma_cm_event_timeout_ms": 0, 00:18:46.436 "dhchap_digests": [ 00:18:46.436 "sha256", 00:18:46.436 "sha384", 00:18:46.436 "sha512" 00:18:46.436 ], 00:18:46.436 "dhchap_dhgroups": [ 00:18:46.436 "null", 00:18:46.436 "ffdhe2048", 00:18:46.436 "ffdhe3072", 00:18:46.436 "ffdhe4096", 00:18:46.436 "ffdhe6144", 00:18:46.436 "ffdhe8192" 00:18:46.436 ] 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_nvme_set_hotplug", 00:18:46.436 "params": { 00:18:46.436 "period_us": 100000, 00:18:46.436 "enable": false 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_malloc_create", 00:18:46.436 "params": { 00:18:46.436 "name": "malloc0", 00:18:46.436 "num_blocks": 8192, 00:18:46.436 "block_size": 4096, 00:18:46.436 "physical_block_size": 4096, 00:18:46.436 "uuid": "4dfc6941-d3c6-4907-abe7-79c589c942b8", 00:18:46.436 "optimal_io_boundary": 0, 00:18:46.436 "md_size": 0, 00:18:46.436 "dif_type": 0, 00:18:46.436 "dif_is_head_of_md": false, 00:18:46.436 "dif_pi_format": 0 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "bdev_wait_for_examine" 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "nbd", 00:18:46.436 "config": [] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "scheduler", 00:18:46.436 "config": [ 00:18:46.436 { 00:18:46.436 "method": "framework_set_scheduler", 00:18:46.436 "params": { 00:18:46.436 "name": "static" 00:18:46.436 } 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "subsystem": "nvmf", 00:18:46.436 "config": [ 00:18:46.436 { 00:18:46.436 "method": "nvmf_set_config", 00:18:46.436 "params": { 00:18:46.436 "discovery_filter": "match_any", 00:18:46.436 "admin_cmd_passthru": { 00:18:46.436 "identify_ctrlr": false 00:18:46.436 } 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_set_max_subsystems", 00:18:46.436 "params": { 00:18:46.436 "max_subsystems": 1024 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_set_crdt", 00:18:46.436 "params": { 00:18:46.436 "crdt1": 0, 00:18:46.436 "crdt2": 0, 00:18:46.436 "crdt3": 0 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_create_transport", 00:18:46.436 "params": { 00:18:46.436 "trtype": "TCP", 00:18:46.436 "max_queue_depth": 128, 00:18:46.436 "max_io_qpairs_per_ctrlr": 127, 00:18:46.436 "in_capsule_data_size": 4096, 00:18:46.436 "max_io_size": 131072, 00:18:46.436 "io_unit_size": 131072, 00:18:46.436 "max_aq_depth": 128, 00:18:46.436 "num_shared_buffers": 511, 00:18:46.436 "buf_cache_size": 4294967295, 00:18:46.436 "dif_insert_or_strip": false, 00:18:46.436 "zcopy": false, 00:18:46.436 "c2h_success": false, 00:18:46.436 "sock_priority": 0, 00:18:46.436 "abort_timeout_sec": 1, 00:18:46.436 "ack_timeout": 0, 00:18:46.436 "data_wr_pool_size": 0 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_create_subsystem", 00:18:46.436 "params": { 00:18:46.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.436 "allow_any_host": false, 00:18:46.436 "serial_number": "00000000000000000000", 00:18:46.436 "model_number": "SPDK bdev Controller", 00:18:46.436 "max_namespaces": 32, 00:18:46.436 "min_cntlid": 1, 00:18:46.436 "max_cntlid": 65519, 00:18:46.436 "ana_reporting": false 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_subsystem_add_host", 00:18:46.436 
"params": { 00:18:46.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.436 "host": "nqn.2016-06.io.spdk:host1", 00:18:46.436 "psk": "key0" 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_subsystem_add_ns", 00:18:46.436 "params": { 00:18:46.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.436 "namespace": { 00:18:46.436 "nsid": 1, 00:18:46.436 "bdev_name": "malloc0", 00:18:46.436 "nguid": "4DFC6941D3C64907ABE779C589C942B8", 00:18:46.436 "uuid": "4dfc6941-d3c6-4907-abe7-79c589c942b8", 00:18:46.436 "no_auto_visible": false 00:18:46.436 } 00:18:46.436 } 00:18:46.436 }, 00:18:46.436 { 00:18:46.436 "method": "nvmf_subsystem_add_listener", 00:18:46.436 "params": { 00:18:46.436 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.436 "listen_address": { 00:18:46.436 "trtype": "TCP", 00:18:46.436 "adrfam": "IPv4", 00:18:46.436 "traddr": "10.0.0.2", 00:18:46.436 "trsvcid": "4420" 00:18:46.436 }, 00:18:46.436 "secure_channel": false, 00:18:46.436 "sock_impl": "ssl" 00:18:46.436 } 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 } 00:18:46.436 ] 00:18:46.436 }' 00:18:46.436 23:56:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:46.694 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:46.694 "subsystems": [ 00:18:46.694 { 00:18:46.694 "subsystem": "keyring", 00:18:46.694 "config": [ 00:18:46.694 { 00:18:46.694 "method": "keyring_file_add_key", 00:18:46.694 "params": { 00:18:46.694 "name": "key0", 00:18:46.694 "path": "/tmp/tmp.T8tnZHv9yR" 00:18:46.694 } 00:18:46.694 } 00:18:46.694 ] 00:18:46.694 }, 00:18:46.694 { 00:18:46.694 "subsystem": "iobuf", 00:18:46.694 "config": [ 00:18:46.694 { 00:18:46.694 "method": "iobuf_set_options", 00:18:46.694 "params": { 00:18:46.694 "small_pool_count": 8192, 00:18:46.694 "large_pool_count": 1024, 00:18:46.694 "small_bufsize": 8192, 00:18:46.694 "large_bufsize": 135168 00:18:46.694 } 00:18:46.694 } 00:18:46.694 ] 00:18:46.694 }, 00:18:46.694 { 00:18:46.694 "subsystem": "sock", 00:18:46.694 "config": [ 00:18:46.694 { 00:18:46.694 "method": "sock_set_default_impl", 00:18:46.694 "params": { 00:18:46.694 "impl_name": "posix" 00:18:46.694 } 00:18:46.694 }, 00:18:46.694 { 00:18:46.694 "method": "sock_impl_set_options", 00:18:46.694 "params": { 00:18:46.694 "impl_name": "ssl", 00:18:46.694 "recv_buf_size": 4096, 00:18:46.694 "send_buf_size": 4096, 00:18:46.694 "enable_recv_pipe": true, 00:18:46.694 "enable_quickack": false, 00:18:46.694 "enable_placement_id": 0, 00:18:46.694 "enable_zerocopy_send_server": true, 00:18:46.694 "enable_zerocopy_send_client": false, 00:18:46.694 "zerocopy_threshold": 0, 00:18:46.694 "tls_version": 0, 00:18:46.694 "enable_ktls": false 00:18:46.694 } 00:18:46.694 }, 00:18:46.694 { 00:18:46.694 "method": "sock_impl_set_options", 00:18:46.694 "params": { 00:18:46.694 "impl_name": "posix", 00:18:46.694 "recv_buf_size": 2097152, 00:18:46.694 "send_buf_size": 2097152, 00:18:46.694 "enable_recv_pipe": true, 00:18:46.694 "enable_quickack": false, 00:18:46.694 "enable_placement_id": 0, 00:18:46.694 "enable_zerocopy_send_server": true, 00:18:46.694 "enable_zerocopy_send_client": false, 00:18:46.694 "zerocopy_threshold": 0, 00:18:46.694 "tls_version": 0, 00:18:46.694 "enable_ktls": false 00:18:46.694 } 00:18:46.694 } 00:18:46.694 ] 00:18:46.694 }, 00:18:46.694 { 00:18:46.694 "subsystem": "vmd", 00:18:46.694 "config": [] 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "subsystem": 
"accel", 00:18:46.695 "config": [ 00:18:46.695 { 00:18:46.695 "method": "accel_set_options", 00:18:46.695 "params": { 00:18:46.695 "small_cache_size": 128, 00:18:46.695 "large_cache_size": 16, 00:18:46.695 "task_count": 2048, 00:18:46.695 "sequence_count": 2048, 00:18:46.695 "buf_count": 2048 00:18:46.695 } 00:18:46.695 } 00:18:46.695 ] 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "subsystem": "bdev", 00:18:46.695 "config": [ 00:18:46.695 { 00:18:46.695 "method": "bdev_set_options", 00:18:46.695 "params": { 00:18:46.695 "bdev_io_pool_size": 65535, 00:18:46.695 "bdev_io_cache_size": 256, 00:18:46.695 "bdev_auto_examine": true, 00:18:46.695 "iobuf_small_cache_size": 128, 00:18:46.695 "iobuf_large_cache_size": 16 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_raid_set_options", 00:18:46.695 "params": { 00:18:46.695 "process_window_size_kb": 1024, 00:18:46.695 "process_max_bandwidth_mb_sec": 0 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_iscsi_set_options", 00:18:46.695 "params": { 00:18:46.695 "timeout_sec": 30 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_nvme_set_options", 00:18:46.695 "params": { 00:18:46.695 "action_on_timeout": "none", 00:18:46.695 "timeout_us": 0, 00:18:46.695 "timeout_admin_us": 0, 00:18:46.695 "keep_alive_timeout_ms": 10000, 00:18:46.695 "arbitration_burst": 0, 00:18:46.695 "low_priority_weight": 0, 00:18:46.695 "medium_priority_weight": 0, 00:18:46.695 "high_priority_weight": 0, 00:18:46.695 "nvme_adminq_poll_period_us": 10000, 00:18:46.695 "nvme_ioq_poll_period_us": 0, 00:18:46.695 "io_queue_requests": 512, 00:18:46.695 "delay_cmd_submit": true, 00:18:46.695 "transport_retry_count": 4, 00:18:46.695 "bdev_retry_count": 3, 00:18:46.695 "transport_ack_timeout": 0, 00:18:46.695 "ctrlr_loss_timeout_sec": 0, 00:18:46.695 "reconnect_delay_sec": 0, 00:18:46.695 "fast_io_fail_timeout_sec": 0, 00:18:46.695 "disable_auto_failback": false, 00:18:46.695 "generate_uuids": false, 00:18:46.695 "transport_tos": 0, 00:18:46.695 "nvme_error_stat": false, 00:18:46.695 "rdma_srq_size": 0, 00:18:46.695 "io_path_stat": false, 00:18:46.695 "allow_accel_sequence": false, 00:18:46.695 "rdma_max_cq_size": 0, 00:18:46.695 "rdma_cm_event_timeout_ms": 0, 00:18:46.695 "dhchap_digests": [ 00:18:46.695 "sha256", 00:18:46.695 "sha384", 00:18:46.695 "sha512" 00:18:46.695 ], 00:18:46.695 "dhchap_dhgroups": [ 00:18:46.695 "null", 00:18:46.695 "ffdhe2048", 00:18:46.695 "ffdhe3072", 00:18:46.695 "ffdhe4096", 00:18:46.695 "ffdhe6144", 00:18:46.695 "ffdhe8192" 00:18:46.695 ] 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_nvme_attach_controller", 00:18:46.695 "params": { 00:18:46.695 "name": "nvme0", 00:18:46.695 "trtype": "TCP", 00:18:46.695 "adrfam": "IPv4", 00:18:46.695 "traddr": "10.0.0.2", 00:18:46.695 "trsvcid": "4420", 00:18:46.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:46.695 "prchk_reftag": false, 00:18:46.695 "prchk_guard": false, 00:18:46.695 "ctrlr_loss_timeout_sec": 0, 00:18:46.695 "reconnect_delay_sec": 0, 00:18:46.695 "fast_io_fail_timeout_sec": 0, 00:18:46.695 "psk": "key0", 00:18:46.695 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:46.695 "hdgst": false, 00:18:46.695 "ddgst": false 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_nvme_set_hotplug", 00:18:46.695 "params": { 00:18:46.695 "period_us": 100000, 00:18:46.695 "enable": false 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_enable_histogram", 00:18:46.695 
"params": { 00:18:46.695 "name": "nvme0n1", 00:18:46.695 "enable": true 00:18:46.695 } 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "method": "bdev_wait_for_examine" 00:18:46.695 } 00:18:46.695 ] 00:18:46.695 }, 00:18:46.695 { 00:18:46.695 "subsystem": "nbd", 00:18:46.695 "config": [] 00:18:46.695 } 00:18:46.695 ] 00:18:46.695 }' 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 3402771 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3402771 ']' 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3402771 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402771 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402771' 00:18:46.695 killing process with pid 3402771 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3402771 00:18:46.695 Received shutdown signal, test time was about 1.000000 seconds 00:18:46.695 00:18:46.695 Latency(us) 00:18:46.695 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.695 =================================================================================================================== 00:18:46.695 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.695 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3402771 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 3402747 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3402747 ']' 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3402747 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3402747 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3402747' 00:18:46.953 killing process with pid 3402747 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3402747 00:18:46.953 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3402747 00:18:47.519 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:47.519 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:18:47.519 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:47.519 "subsystems": [ 00:18:47.519 { 00:18:47.519 "subsystem": "keyring", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "keyring_file_add_key", 00:18:47.519 "params": { 00:18:47.519 "name": "key0", 00:18:47.519 "path": "/tmp/tmp.T8tnZHv9yR" 00:18:47.519 } 00:18:47.519 } 00:18:47.519 ] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "iobuf", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "iobuf_set_options", 00:18:47.519 "params": { 00:18:47.519 "small_pool_count": 8192, 00:18:47.519 "large_pool_count": 1024, 00:18:47.519 "small_bufsize": 8192, 00:18:47.519 "large_bufsize": 135168 00:18:47.519 } 00:18:47.519 } 00:18:47.519 ] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "sock", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "sock_set_default_impl", 00:18:47.519 "params": { 00:18:47.519 "impl_name": "posix" 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "sock_impl_set_options", 00:18:47.519 "params": { 00:18:47.519 "impl_name": "ssl", 00:18:47.519 "recv_buf_size": 4096, 00:18:47.519 "send_buf_size": 4096, 00:18:47.519 "enable_recv_pipe": true, 00:18:47.519 "enable_quickack": false, 00:18:47.519 "enable_placement_id": 0, 00:18:47.519 "enable_zerocopy_send_server": true, 00:18:47.519 "enable_zerocopy_send_client": false, 00:18:47.519 "zerocopy_threshold": 0, 00:18:47.519 "tls_version": 0, 00:18:47.519 "enable_ktls": false 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "sock_impl_set_options", 00:18:47.519 "params": { 00:18:47.519 "impl_name": "posix", 00:18:47.519 "recv_buf_size": 2097152, 00:18:47.519 "send_buf_size": 2097152, 00:18:47.519 "enable_recv_pipe": true, 00:18:47.519 "enable_quickack": false, 00:18:47.519 "enable_placement_id": 0, 00:18:47.519 "enable_zerocopy_send_server": true, 00:18:47.519 "enable_zerocopy_send_client": false, 00:18:47.519 "zerocopy_threshold": 0, 00:18:47.519 "tls_version": 0, 00:18:47.519 "enable_ktls": false 00:18:47.519 } 00:18:47.519 } 00:18:47.519 ] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "vmd", 00:18:47.519 "config": [] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "accel", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "accel_set_options", 00:18:47.519 "params": { 00:18:47.519 "small_cache_size": 128, 00:18:47.519 "large_cache_size": 16, 00:18:47.519 "task_count": 2048, 00:18:47.519 "sequence_count": 2048, 00:18:47.519 "buf_count": 2048 00:18:47.519 } 00:18:47.519 } 00:18:47.519 ] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "bdev", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "bdev_set_options", 00:18:47.519 "params": { 00:18:47.519 "bdev_io_pool_size": 65535, 00:18:47.519 "bdev_io_cache_size": 256, 00:18:47.519 "bdev_auto_examine": true, 00:18:47.519 "iobuf_small_cache_size": 128, 00:18:47.519 "iobuf_large_cache_size": 16 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_raid_set_options", 00:18:47.519 "params": { 00:18:47.519 "process_window_size_kb": 1024, 00:18:47.519 "process_max_bandwidth_mb_sec": 0 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_iscsi_set_options", 00:18:47.519 "params": { 00:18:47.519 "timeout_sec": 30 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_nvme_set_options", 00:18:47.519 "params": { 00:18:47.519 "action_on_timeout": "none", 
00:18:47.519 "timeout_us": 0, 00:18:47.519 "timeout_admin_us": 0, 00:18:47.519 "keep_alive_timeout_ms": 10000, 00:18:47.519 "arbitration_burst": 0, 00:18:47.519 "low_priority_weight": 0, 00:18:47.519 "medium_priority_weight": 0, 00:18:47.519 "high_priority_weight": 0, 00:18:47.519 "nvme_adminq_poll_period_us": 10000, 00:18:47.519 "nvme_ioq_poll_period_us": 0, 00:18:47.519 "io_queue_requests": 0, 00:18:47.519 "delay_cmd_submit": true, 00:18:47.519 "transport_retry_count": 4, 00:18:47.519 "bdev_retry_count": 3, 00:18:47.519 "transport_ack_timeout": 0, 00:18:47.519 "ctrlr_loss_timeout_sec": 0, 00:18:47.519 "reconnect_delay_sec": 0, 00:18:47.519 "fast_io_fail_timeout_sec": 0, 00:18:47.519 "disable_auto_failback": false, 00:18:47.519 "generate_uuids": false, 00:18:47.519 "transport_tos": 0, 00:18:47.519 "nvme_error_stat": false, 00:18:47.519 "rdma_srq_size": 0, 00:18:47.519 "io_path_stat": false, 00:18:47.519 "allow_accel_sequence": false, 00:18:47.519 "rdma_max_cq_size": 0, 00:18:47.519 "rdma_cm_event_timeout_ms": 0, 00:18:47.519 "dhchap_digests": [ 00:18:47.519 "sha256", 00:18:47.519 "sha384", 00:18:47.519 "sha512" 00:18:47.519 ], 00:18:47.519 "dhchap_dhgroups": [ 00:18:47.519 "null", 00:18:47.519 "ffdhe2048", 00:18:47.519 "ffdhe3072", 00:18:47.519 "ffdhe4096", 00:18:47.519 "ffdhe6144", 00:18:47.519 "ffdhe8192" 00:18:47.519 ] 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_nvme_set_hotplug", 00:18:47.519 "params": { 00:18:47.519 "period_us": 100000, 00:18:47.519 "enable": false 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_malloc_create", 00:18:47.519 "params": { 00:18:47.519 "name": "malloc0", 00:18:47.519 "num_blocks": 8192, 00:18:47.519 "block_size": 4096, 00:18:47.519 "physical_block_size": 4096, 00:18:47.519 "uuid": "4dfc6941-d3c6-4907-abe7-79c589c942b8", 00:18:47.519 "optimal_io_boundary": 0, 00:18:47.519 "md_size": 0, 00:18:47.519 "dif_type": 0, 00:18:47.519 "dif_is_head_of_md": false, 00:18:47.519 "dif_pi_format": 0 00:18:47.519 } 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "method": "bdev_wait_for_examine" 00:18:47.519 } 00:18:47.519 ] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "nbd", 00:18:47.519 "config": [] 00:18:47.519 }, 00:18:47.519 { 00:18:47.519 "subsystem": "scheduler", 00:18:47.519 "config": [ 00:18:47.519 { 00:18:47.519 "method": "framework_set_scheduler", 00:18:47.520 "params": { 00:18:47.520 "name": "static" 00:18:47.520 } 00:18:47.520 } 00:18:47.520 ] 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "subsystem": "nvmf", 00:18:47.520 "config": [ 00:18:47.520 { 00:18:47.520 "method": "nvmf_set_config", 00:18:47.520 "params": { 00:18:47.520 "discovery_filter": "match_any", 00:18:47.520 "admin_cmd_passthru": { 00:18:47.520 "identify_ctrlr": false 00:18:47.520 } 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_set_max_subsystems", 00:18:47.520 "params": { 00:18:47.520 "max_subsystems": 1024 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_set_crdt", 00:18:47.520 "params": { 00:18:47.520 "crdt1": 0, 00:18:47.520 "crdt2": 0, 00:18:47.520 "crdt3": 0 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_create_transport", 00:18:47.520 "params": { 00:18:47.520 "trtype": "TCP", 00:18:47.520 "max_queue_depth": 128, 00:18:47.520 "max_io_qpairs_per_ctrlr": 127, 00:18:47.520 "in_capsule_data_size": 4096, 00:18:47.520 "max_io_size": 131072, 00:18:47.520 "io_unit_size": 131072, 00:18:47.520 "max_aq_depth": 128, 00:18:47.520 "num_shared_buffers": 511, 
00:18:47.520 "buf_cache_size": 4294967295, 00:18:47.520 "dif_insert_or_strip": false, 00:18:47.520 "zcopy": false, 00:18:47.520 "c2h_success": false, 00:18:47.520 "sock_priority": 0, 00:18:47.520 "abort_timeout_sec": 1, 00:18:47.520 "ack_timeout": 0, 00:18:47.520 "data_wr_pool_size": 0 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_create_subsystem", 00:18:47.520 "params": { 00:18:47.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.520 "allow_any_host": false, 00:18:47.520 "serial_number": "00000000000000000000", 00:18:47.520 "model_number": "SPDK bdev Controller", 00:18:47.520 "max_namespaces": 32, 00:18:47.520 "min_cntlid": 1, 00:18:47.520 "max_cntlid": 65519, 00:18:47.520 "ana_reporting": false 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_subsystem_add_host", 00:18:47.520 "params": { 00:18:47.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.520 "host": "nqn.2016-06.io.spdk:host1", 00:18:47.520 "psk": "key0" 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_subsystem_add_ns", 00:18:47.520 "params": { 00:18:47.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.520 "namespace": { 00:18:47.520 "nsid": 1, 00:18:47.520 "bdev_name": "malloc0", 00:18:47.520 "nguid": "4DFC6941D3C64907ABE779C589C942B8", 00:18:47.520 "uuid": "4dfc6941-d3c6-4907-abe7-79c589c942b8", 00:18:47.520 "no_auto_visible": false 00:18:47.520 } 00:18:47.520 } 00:18:47.520 }, 00:18:47.520 { 00:18:47.520 "method": "nvmf_subsystem_add_listener", 00:18:47.520 "params": { 00:18:47.520 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:47.520 "listen_address": { 00:18:47.520 "trtype": "TCP", 00:18:47.520 "adrfam": "IPv4", 00:18:47.520 "traddr": "10.0.0.2", 00:18:47.520 "trsvcid": "4420" 00:18:47.520 }, 00:18:47.520 "secure_channel": false, 00:18:47.520 "sock_impl": "ssl" 00:18:47.520 } 00:18:47.520 } 00:18:47.520 ] 00:18:47.520 } 00:18:47.520 ] 00:18:47.520 }' 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=3403180 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 3403180 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3403180 ']' 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.520 23:56:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:47.520 [2024-07-24 23:56:17.906663] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:18:47.520 [2024-07-24 23:56:17.906751] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.520 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.520 [2024-07-24 23:56:17.977033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.520 [2024-07-24 23:56:18.090772] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.520 [2024-07-24 23:56:18.090836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.520 [2024-07-24 23:56:18.090853] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.520 [2024-07-24 23:56:18.090866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.520 [2024-07-24 23:56:18.090879] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.520 [2024-07-24 23:56:18.090962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:47.778 [2024-07-24 23:56:18.331257] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:47.778 [2024-07-24 23:56:18.373990] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.778 [2024-07-24 23:56:18.374222] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=3403332 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 3403332 /var/tmp/bdevperf.sock 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 3403332 ']' 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:48.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
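The restart here replays the configurations captured a moment ago: target/tls.sh@265 stored the target's save_config output in $tgtcfg and @266 stored bdevperf's in $bperfcfg, and both are handed back at startup as -c /dev/fd/62 and -c /dev/fd/63, which is what bash process substitution produces. The shape of the round-trip, simplified (the exact redirections in tls.sh may differ; rpc.py, nvmf_tgt and bdevperf abbreviate the full paths seen in the trace):

  # capture each application's live JSON configuration over its RPC socket
  tgtcfg=$(rpc.py save_config)
  bperfcfg=$(rpc.py -s /var/tmp/bdevperf.sock save_config)
  # replay on restart; <(...) appears inside the child process as /dev/fd/NN
  nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg") &
  bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &

The JSON echoed in the next entries is that $bperfcfg payload being fed to the new bdevperf instance (pid 3403332) on /dev/fd/63.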
00:18:48.343 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:48.343 "subsystems": [ 00:18:48.343 { 00:18:48.343 "subsystem": "keyring", 00:18:48.343 "config": [ 00:18:48.343 { 00:18:48.343 "method": "keyring_file_add_key", 00:18:48.343 "params": { 00:18:48.343 "name": "key0", 00:18:48.343 "path": "/tmp/tmp.T8tnZHv9yR" 00:18:48.343 } 00:18:48.343 } 00:18:48.343 ] 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "subsystem": "iobuf", 00:18:48.343 "config": [ 00:18:48.343 { 00:18:48.343 "method": "iobuf_set_options", 00:18:48.343 "params": { 00:18:48.343 "small_pool_count": 8192, 00:18:48.343 "large_pool_count": 1024, 00:18:48.343 "small_bufsize": 8192, 00:18:48.343 "large_bufsize": 135168 00:18:48.343 } 00:18:48.343 } 00:18:48.343 ] 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "subsystem": "sock", 00:18:48.343 "config": [ 00:18:48.343 { 00:18:48.343 "method": "sock_set_default_impl", 00:18:48.343 "params": { 00:18:48.343 "impl_name": "posix" 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "sock_impl_set_options", 00:18:48.343 "params": { 00:18:48.343 "impl_name": "ssl", 00:18:48.343 "recv_buf_size": 4096, 00:18:48.343 "send_buf_size": 4096, 00:18:48.343 "enable_recv_pipe": true, 00:18:48.343 "enable_quickack": false, 00:18:48.343 "enable_placement_id": 0, 00:18:48.343 "enable_zerocopy_send_server": true, 00:18:48.343 "enable_zerocopy_send_client": false, 00:18:48.343 "zerocopy_threshold": 0, 00:18:48.343 "tls_version": 0, 00:18:48.343 "enable_ktls": false 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "sock_impl_set_options", 00:18:48.343 "params": { 00:18:48.343 "impl_name": "posix", 00:18:48.343 "recv_buf_size": 2097152, 00:18:48.343 "send_buf_size": 2097152, 00:18:48.343 "enable_recv_pipe": true, 00:18:48.343 "enable_quickack": false, 00:18:48.343 "enable_placement_id": 0, 00:18:48.343 "enable_zerocopy_send_server": true, 00:18:48.343 "enable_zerocopy_send_client": false, 00:18:48.343 "zerocopy_threshold": 0, 00:18:48.343 "tls_version": 0, 00:18:48.343 "enable_ktls": false 00:18:48.343 } 00:18:48.343 } 00:18:48.343 ] 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "subsystem": "vmd", 00:18:48.343 "config": [] 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "subsystem": "accel", 00:18:48.343 "config": [ 00:18:48.343 { 00:18:48.343 "method": "accel_set_options", 00:18:48.343 "params": { 00:18:48.343 "small_cache_size": 128, 00:18:48.343 "large_cache_size": 16, 00:18:48.343 "task_count": 2048, 00:18:48.343 "sequence_count": 2048, 00:18:48.343 "buf_count": 2048 00:18:48.343 } 00:18:48.343 } 00:18:48.343 ] 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "subsystem": "bdev", 00:18:48.343 "config": [ 00:18:48.343 { 00:18:48.343 "method": "bdev_set_options", 00:18:48.343 "params": { 00:18:48.343 "bdev_io_pool_size": 65535, 00:18:48.343 "bdev_io_cache_size": 256, 00:18:48.343 "bdev_auto_examine": true, 00:18:48.343 "iobuf_small_cache_size": 128, 00:18:48.343 "iobuf_large_cache_size": 16 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "bdev_raid_set_options", 00:18:48.343 "params": { 00:18:48.343 "process_window_size_kb": 1024, 00:18:48.343 "process_max_bandwidth_mb_sec": 0 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "bdev_iscsi_set_options", 00:18:48.343 "params": { 00:18:48.343 "timeout_sec": 30 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "bdev_nvme_set_options", 00:18:48.343 "params": { 00:18:48.343 "action_on_timeout": "none", 00:18:48.343 "timeout_us": 0, 
00:18:48.343 "timeout_admin_us": 0, 00:18:48.343 "keep_alive_timeout_ms": 10000, 00:18:48.343 "arbitration_burst": 0, 00:18:48.343 "low_priority_weight": 0, 00:18:48.343 "medium_priority_weight": 0, 00:18:48.343 "high_priority_weight": 0, 00:18:48.343 "nvme_adminq_poll_period_us": 10000, 00:18:48.343 "nvme_ioq_poll_period_us": 0, 00:18:48.343 "io_queue_requests": 512, 00:18:48.343 "delay_cmd_submit": true, 00:18:48.343 "transport_retry_count": 4, 00:18:48.343 "bdev_retry_count": 3, 00:18:48.343 "transport_ack_timeout": 0, 00:18:48.343 "ctrlr_loss_timeout_sec": 0, 00:18:48.343 "reconnect_delay_sec": 0, 00:18:48.343 "fast_io_fail_timeout_sec": 0, 00:18:48.343 "disable_auto_failback": false, 00:18:48.343 "generate_uuids": false, 00:18:48.343 "transport_tos": 0, 00:18:48.343 "nvme_error_stat": false, 00:18:48.343 "rdma_srq_size": 0, 00:18:48.343 "io_path_stat": false, 00:18:48.343 "allow_accel_sequence": false, 00:18:48.343 "rdma_max_cq_size": 0, 00:18:48.343 "rdma_cm_event_timeout_ms": 0, 00:18:48.343 "dhchap_digests": [ 00:18:48.343 "sha256", 00:18:48.343 "sha384", 00:18:48.343 "sha512" 00:18:48.343 ], 00:18:48.343 "dhchap_dhgroups": [ 00:18:48.343 "null", 00:18:48.343 "ffdhe2048", 00:18:48.343 "ffdhe3072", 00:18:48.343 "ffdhe4096", 00:18:48.343 "ffdhe6144", 00:18:48.343 "ffdhe8192" 00:18:48.343 ] 00:18:48.343 } 00:18:48.343 }, 00:18:48.343 { 00:18:48.343 "method": "bdev_nvme_attach_controller", 00:18:48.343 "params": { 00:18:48.343 "name": "nvme0", 00:18:48.343 "trtype": "TCP", 00:18:48.343 "adrfam": "IPv4", 00:18:48.343 "traddr": "10.0.0.2", 00:18:48.343 "trsvcid": "4420", 00:18:48.343 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.343 "prchk_reftag": false, 00:18:48.343 "prchk_guard": false, 00:18:48.343 "ctrlr_loss_timeout_sec": 0, 00:18:48.343 "reconnect_delay_sec": 0, 00:18:48.343 "fast_io_fail_timeout_sec": 0, 00:18:48.343 "psk": "key0", 00:18:48.343 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.343 "hdgst": false, 00:18:48.343 "ddgst": false 00:18:48.343 } 00:18:48.343 }, 00:18:48.344 { 00:18:48.344 "method": "bdev_nvme_set_hotplug", 00:18:48.344 "params": { 00:18:48.344 "period_us": 100000, 00:18:48.344 "enable": false 00:18:48.344 } 00:18:48.344 }, 00:18:48.344 { 00:18:48.344 "method": "bdev_enable_histogram", 00:18:48.344 "params": { 00:18:48.344 "name": "nvme0n1", 00:18:48.344 "enable": true 00:18:48.344 } 00:18:48.344 }, 00:18:48.344 { 00:18:48.344 "method": "bdev_wait_for_examine" 00:18:48.344 } 00:18:48.344 ] 00:18:48.344 }, 00:18:48.344 { 00:18:48.344 "subsystem": "nbd", 00:18:48.344 "config": [] 00:18:48.344 } 00:18:48.344 ] 00:18:48.344 }' 00:18:48.344 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:48.344 23:56:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:48.344 [2024-07-24 23:56:18.916497] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:18:48.344 [2024-07-24 23:56:18.916579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3403332 ] 00:18:48.344 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.602 [2024-07-24 23:56:18.977410] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.602 [2024-07-24 23:56:19.092972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.860 [2024-07-24 23:56:19.278652] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.426 23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.426 23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:49.426 23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:49.426 23:56:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:49.683 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.683 23:56:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:49.683 Running I/O for 1 seconds... 00:18:50.638 00:18:50.639 Latency(us) 00:18:50.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.639 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:50.639 Verification LBA range: start 0x0 length 0x2000 00:18:50.639 nvme0n1 : 1.03 3406.71 13.31 0.00 0.00 37106.37 6505.05 47574.28 00:18:50.639 =================================================================================================================== 00:18:50.639 Total : 3406.71 13.31 0.00 0.00 37106.37 6505.05 47574.28 00:18:50.639 0 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:18:50.639 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:50.639 nvmf_trace.0 00:18:50.896 23:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3403332 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3403332 ']' 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3403332 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3403332 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3403332' 00:18:50.896 killing process with pid 3403332 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3403332 00:18:50.896 Received shutdown signal, test time was about 1.000000 seconds 00:18:50.896 00:18:50.896 Latency(us) 00:18:50.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.896 =================================================================================================================== 00:18:50.896 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:50.896 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3403332 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:51.153 rmmod nvme_tcp 00:18:51.153 rmmod nvme_fabrics 00:18:51.153 rmmod nvme_keyring 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 3403180 ']' 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 3403180 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 3403180 ']' 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 3403180 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:51.153 23:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3403180 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3403180' 00:18:51.153 killing process with pid 3403180 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 3403180 00:18:51.153 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 3403180 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:51.411 23:56:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.940 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.KQSforYgvC /tmp/tmp.sLYUuCHSaz /tmp/tmp.T8tnZHv9yR 00:18:53.941 00:18:53.941 real 1m20.703s 00:18:53.941 user 2m7.142s 00:18:53.941 sys 0m27.316s 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.941 ************************************ 00:18:53.941 END TEST nvmf_tls 00:18:53.941 ************************************ 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.941 23:56:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:53.941 ************************************ 00:18:53.941 START TEST nvmf_fips 00:18:53.941 ************************************ 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:53.941 * Looking for test storage... 
00:18:53.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:53.941 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:18:53.942 Error setting digest 00:18:53.942 00A285D9567F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:53.942 00A285D9567F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:53.942 23:56:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:55.837 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 
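[editor's note] The discovery loop above matches E810 functions by PCI IDs (vendor 0x8086, device 0x159b, bound to the ice driver). The same set can be cross-checked manually with standard pciutils:

  lspci -d 8086:159b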
00:18:55.837 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:55.837 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.837 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:55.838 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.838 
23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:18:55.838 00:18:55.838 --- 10.0.0.2 ping statistics --- 00:18:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.838 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:18:55.838 00:18:55.838 --- 10.0.0.1 ping statistics --- 00:18:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.838 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=3405696 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 3405696 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3405696 ']' 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.838 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.095 [2024-07-24 23:56:26.497985] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
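[editor's note] Both pings succeed because nvmf_tcp_init wired the two E810 ports back-to-back across a network namespace: one port becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk, its peer stays in the root namespace as the initiator (10.0.0.1). Condensed from the trace above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT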
00:18:56.095 [2024-07-24 23:56:26.498062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:56.095 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.095 [2024-07-24 23:56:26.565887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.095 [2024-07-24 23:56:26.681006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:56.096 [2024-07-24 23:56:26.681068] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:56.096 [2024-07-24 23:56:26.681084] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:56.096 [2024-07-24 23:56:26.681098] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:56.096 [2024-07-24 23:56:26.681109] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:56.096 [2024-07-24 23:56:26.681141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:56.352 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:56.353 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:56.353 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:56.353 23:56:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:56.610 [2024-07-24 23:56:27.050609] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.610 [2024-07-24 23:56:27.066584] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:56.610 [2024-07-24 23:56:27.066827] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.610 
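[editor's note] The key written above is in the NVMe TLS PSK interchange format, NVMeTLSkey-1:01:<base64 data>:, where the 01 field selects SHA-256. The file must not be group- or world-readable, hence the chmod; repeating the two steps from the trace (key value taken from this run):

  echo -n "NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:" > key.txt
  chmod 0600 key.txt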
[2024-07-24 23:56:27.099139] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:56.610 malloc0 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=3405725 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 3405725 /var/tmp/bdevperf.sock 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 3405725 ']' 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:56.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:56.610 23:56:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:56.610 [2024-07-24 23:56:27.199385] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:18:56.610 [2024-07-24 23:56:27.199474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3405725 ] 00:18:56.868 EAL: No free 2048 kB hugepages reported on node 1 00:18:56.868 [2024-07-24 23:56:27.264672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:56.868 [2024-07-24 23:56:27.377847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:57.799 23:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:57.799 23:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:18:57.799 23:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:58.056 [2024-07-24 23:56:28.450468] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:58.056 [2024-07-24 23:56:28.450603] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:58.056 TLSTESTn1 00:18:58.056 23:56:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:58.056 Running I/O for 10 seconds... 
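[editor's note] Under -z, bdevperf idles until perform_tests is pushed over its RPC socket, which is what starts the ten-second run above; the earlier TLS pass sanity-checked the attach the same way before driving I/O:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests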
00:19:10.246 00:19:10.246 Latency(us) 00:19:10.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.246 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:10.246 Verification LBA range: start 0x0 length 0x2000 00:19:10.246 TLSTESTn1 : 10.03 3571.88 13.95 0.00 0.00 35765.88 10097.40 44661.57 00:19:10.246 =================================================================================================================== 00:19:10.246 Total : 3571.88 13.95 0.00 0.00 35765.88 10097.40 44661.57 00:19:10.246 0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:10.246 nvmf_trace.0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3405725 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3405725 ']' 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3405725 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3405725 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3405725' 00:19:10.246 killing process with pid 3405725 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3405725 00:19:10.246 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.246 00:19:10.246 Latency(us) 00:19:10.246 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.246 =================================================================================================================== 00:19:10.246 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:10.246 
[2024-07-24 23:56:38.825238] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:10.246 23:56:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3405725 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:10.246 rmmod nvme_tcp 00:19:10.246 rmmod nvme_fabrics 00:19:10.246 rmmod nvme_keyring 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 3405696 ']' 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 3405696 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 3405696 ']' 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 3405696 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3405696 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:10.246 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3405696' 00:19:10.246 killing process with pid 3405696 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 3405696 00:19:10.247 [2024-07-24 23:56:39.155027] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 3405696 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:10.247 23:56:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:10.247 23:56:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:19:11.180 00:19:11.180 real 0m17.488s 00:19:11.180 user 0m23.587s 00:19:11.180 sys 0m5.353s 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:11.180 ************************************ 00:19:11.180 END TEST nvmf_fips 00:19:11.180 ************************************ 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:19:11.180 23:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:19:11.181 23:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:19:11.181 23:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:19:11.181 23:56:41 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:19:11.181 23:56:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.075 
23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:13.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:13.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:13.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:13.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.075 ************************************ 00:19:13.075 START TEST nvmf_perf_adq 00:19:13.075 ************************************ 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:13.075 * Looking for test storage... 
00:19:13.075 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.075 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.076 23:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:13.076 23:56:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:14.973 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:14.974 23:56:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:14.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:14.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:14.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:14.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:19:14.974 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:15.539 23:56:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:17.438 23:56:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
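[editor's note] gather_supported_nvmf_pci_devs is now replayed a third time inside nvmftestinit, after the ice driver bounce traced just above (rmmod ice; modprobe ice; sleep 5). Stripped of xtrace noise, each pass has roughly the shape below. This is a simplified sketch, not the verbatim nvmf/common.sh code; only the E810 device IDs actually matched in this run are shown, and the operstate check is omitted.

  # Scan PCI functions for supported NICs (here: Intel E810, IDs 0x1592/0x159b)
  # and collect the kernel net devices exposed under each matching function.
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
      [[ $vendor == 0x8086 ]] || continue
      case $device in 0x1592|0x159b) ;; *) continue ;; esac
      for net in "$pci"/net/*; do          # e.g. .../0000:0a:00.0/net/cvl_0_0
          [[ -d $net ]] && net_devs+=("${net##*/}")
      done
  done
  (( ${#net_devs[@]} > 0 )) || exit 1      # the test bails out with no usable ports
  # In this run: net_devs=(cvl_0_0 cvl_0_1)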
00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:22.700 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:22.701 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:22.701 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:22.701 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:22.701 23:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:22.701 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:22.701 23:56:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
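[editor's note] The nvmf_tcp_init namespace plumbing just traced condenses to the commands below, copied from the trace; the iptables rule and ping checks that follow complete the bring-up.

  # Split the back-to-back E810 pair across network stacks on one host:
  # cvl_0_0 (target, 10.0.0.2) moves into a namespace, while cvl_0_1
  # (initiator, 10.0.0.1) stays in the root namespace.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Then: allow NVMe/TCP in on the initiator port and sanity-check both ways.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1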
00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:22.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:19:22.701 00:19:22.701 --- 10.0.0.2 ping statistics --- 00:19:22.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.701 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:22.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:19:22.701 00:19:22.701 --- 10.0.0.1 ping statistics --- 00:19:22.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.701 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.701 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3411588 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3411588 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3411588 ']' 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:22.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:22.702 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.702 [2024-07-24 23:56:53.136992] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:19:22.702 [2024-07-24 23:56:53.137081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.702 EAL: No free 2048 kB hugepages reported on node 1 00:19:22.702 [2024-07-24 23:56:53.211692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.959 [2024-07-24 23:56:53.335472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.959 [2024-07-24 23:56:53.335526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.959 [2024-07-24 23:56:53.335542] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.959 [2024-07-24 23:56:53.335556] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.959 [2024-07-24 23:56:53.335567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:22.959 [2024-07-24 23:56:53.335658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.959 [2024-07-24 23:56:53.335732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.959 [2024-07-24 23:56:53.335692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.959 [2024-07-24 23:56:53.335728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
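[editor's note] adq_configure_nvmf_target, which starts here and finishes below with the 10.0.0.2:4420 listener, issues the following RPCs against the freshly started nvmf_tgt. This is condensed from the rpc_cmd calls in the trace; scripts/rpc.py stands in for the harness's rpc_cmd wrapper on /var/tmp/spdk.sock.

  rpc=scripts/rpc.py
  # Tune the posix sock implementation before framework init, then build a
  # malloc-backed subsystem and listen on the E810 port inside the namespace.
  $rpc sock_impl_set_options --enable-placement-id 0 \
      --enable-zerocopy-send-server -i posix
  $rpc framework_start_init
  $rpc nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  $rpc bdev_malloc_create 64 512 -b Malloc1      # 64 MiB bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420

The nvmf_get_stats / jq check further down then verifies that all four poll groups carry exactly one active I/O qpair, i.e. that the 0xF0-core perf load spread across the target's reactors as expected.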
00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.959 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 [2024-07-24 23:56:53.575432] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 Malloc1 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.216 [2024-07-24 23:56:53.628485] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=3411621 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:23.216 23:56:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:23.216 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:25.113 "tick_rate": 2700000000, 00:19:25.113 "poll_groups": [ 00:19:25.113 { 00:19:25.113 "name": "nvmf_tgt_poll_group_000", 00:19:25.113 "admin_qpairs": 1, 00:19:25.113 "io_qpairs": 1, 00:19:25.113 "current_admin_qpairs": 1, 00:19:25.113 "current_io_qpairs": 1, 00:19:25.113 "pending_bdev_io": 0, 00:19:25.113 "completed_nvme_io": 20242, 00:19:25.113 "transports": [ 00:19:25.113 { 00:19:25.113 "trtype": "TCP" 00:19:25.113 } 00:19:25.113 ] 00:19:25.113 }, 00:19:25.113 { 00:19:25.113 "name": "nvmf_tgt_poll_group_001", 00:19:25.113 "admin_qpairs": 0, 00:19:25.113 "io_qpairs": 1, 00:19:25.113 "current_admin_qpairs": 0, 00:19:25.113 "current_io_qpairs": 1, 00:19:25.113 "pending_bdev_io": 0, 00:19:25.113 "completed_nvme_io": 20895, 00:19:25.113 "transports": [ 00:19:25.113 { 00:19:25.113 "trtype": "TCP" 00:19:25.113 } 00:19:25.113 ] 00:19:25.113 }, 00:19:25.113 { 00:19:25.113 "name": "nvmf_tgt_poll_group_002", 00:19:25.113 "admin_qpairs": 0, 00:19:25.113 "io_qpairs": 1, 00:19:25.113 "current_admin_qpairs": 0, 00:19:25.113 "current_io_qpairs": 1, 00:19:25.113 "pending_bdev_io": 0, 00:19:25.113 "completed_nvme_io": 20370, 00:19:25.113 "transports": [ 00:19:25.113 { 00:19:25.113 "trtype": "TCP" 00:19:25.113 } 00:19:25.113 ] 00:19:25.113 }, 00:19:25.113 { 00:19:25.113 "name": "nvmf_tgt_poll_group_003", 00:19:25.113 "admin_qpairs": 0, 00:19:25.113 "io_qpairs": 1, 00:19:25.113 "current_admin_qpairs": 0, 00:19:25.113 "current_io_qpairs": 1, 00:19:25.113 "pending_bdev_io": 0, 00:19:25.113 "completed_nvme_io": 20054, 00:19:25.113 "transports": [ 00:19:25.113 { 00:19:25.113 "trtype": "TCP" 00:19:25.113 } 00:19:25.113 ] 00:19:25.113 } 00:19:25.113 ] 00:19:25.113 }' 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:25.113 23:56:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 3411621
00:19:33.215 Initializing NVMe Controllers
00:19:33.215 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:19:33.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:19:33.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:19:33.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:19:33.215 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:19:33.215 Initialization complete. Launching workers.
00:19:33.215 ========================================================
00:19:33.215 Latency(us)
00:19:33.215 Device Information : IOPS MiB/s Average min max
00:19:33.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10529.84 41.13 6077.91 2430.31 8097.12
00:19:33.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10992.73 42.94 5821.55 2961.66 8334.17
00:19:33.215 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10716.84 41.86 5972.06 3381.92 8133.75
00:19:33.216 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10616.94 41.47 6027.65 1661.52 8939.38
00:19:33.216 ========================================================
00:19:33.216 Total : 42856.35 167.41 5973.24 1661.52 8939.38
00:19:33.216
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:33.216 rmmod nvme_tcp
00:19:33.216 rmmod nvme_fabrics
00:19:33.216 rmmod nvme_keyring
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3411588 ']'
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3411588
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3411588 ']'
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3411588
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:33.216 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3411588
00:19:33.474 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:33.474 23:57:03
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:33.474 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3411588' 00:19:33.474 killing process with pid 3411588 00:19:33.474 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3411588 00:19:33.474 23:57:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3411588 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:33.732 23:57:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.661 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:35.661 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:35.661 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:36.225 23:57:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:38.751 23:57:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.013 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:44.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:44.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:44.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:44.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:44.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:44.014 00:19:44.014 --- 10.0.0.2 ping statistics --- 00:19:44.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.014 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:44.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:44.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:19:44.014 00:19:44.014 --- 10.0.0.1 ping statistics --- 00:19:44.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.014 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:44.014 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:19:44.015 net.core.busy_poll = 1 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:19:44.015 net.core.busy_read = 1 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:19:44.015 23:57:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:19:44.015 
23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=3414865 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 3414865 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 3414865 ']' 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 [2024-07-24 23:57:14.124019] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:19:44.015 [2024-07-24 23:57:14.124114] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.015 EAL: No free 2048 kB hugepages reported on node 1 00:19:44.015 [2024-07-24 23:57:14.193095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.015 [2024-07-24 23:57:14.308523] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.015 [2024-07-24 23:57:14.308592] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.015 [2024-07-24 23:57:14.308613] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.015 [2024-07-24 23:57:14.308630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.015 [2024-07-24 23:57:14.308643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
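For orientation: the ADQ host-side setup that the trace above just performed (perf_adq.sh lines 22-38) condenses to the sketch below. This is an illustrative summary of the logged commands, not a verbatim replay; cvl_0_0 and cvl_0_0_ns_spdk are the interface and namespace names this run created, and the flower filter is what steers NVMe/TCP traffic for 10.0.0.2:4420 into the second hardware traffic class.

  # Everything NIC-side runs against the port that was moved into the test namespace.
  NS="ip netns exec cvl_0_0_ns_spdk"
  $NS ethtool --offload cvl_0_0 hw-tc-offload on
  $NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1      # poll sockets instead of sleeping on them
  sysctl -w net.core.busy_read=1
  # Two traffic classes: queues 2@0 for default traffic, 2@2 for ADQ traffic.
  $NS tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev cvl_0_0 ingress
  # Pin NVMe/TCP (dst port 4420) to hardware traffic class 1, skipping the software path.
  $NS tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then started with -m 0xF --wait-for-rpc so that the posix socket options (placement id, zero-copy send) can be set before the framework initializes, which is what the rpc_cmd calls below do.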
00:19:44.015 [2024-07-24 23:57:14.308734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.015 [2024-07-24 23:57:14.308798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.015 [2024-07-24 23:57:14.308871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.015 [2024-07-24 23:57:14.308863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 [2024-07-24 23:57:14.566475] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 Malloc1 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.015 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:44.016 [2024-07-24 23:57:14.619864] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.016 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.273 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=3415015 00:19:44.273 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:44.273 23:57:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:44.273 EAL: No free 2048 kB hugepages reported on node 1 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:46.171 "tick_rate": 2700000000, 00:19:46.171 "poll_groups": [ 00:19:46.171 { 00:19:46.171 "name": "nvmf_tgt_poll_group_000", 00:19:46.171 "admin_qpairs": 1, 00:19:46.171 "io_qpairs": 4, 00:19:46.171 "current_admin_qpairs": 1, 00:19:46.171 
"current_io_qpairs": 4, 00:19:46.171 "pending_bdev_io": 0, 00:19:46.171 "completed_nvme_io": 33520, 00:19:46.171 "transports": [ 00:19:46.171 { 00:19:46.171 "trtype": "TCP" 00:19:46.171 } 00:19:46.171 ] 00:19:46.171 }, 00:19:46.171 { 00:19:46.171 "name": "nvmf_tgt_poll_group_001", 00:19:46.171 "admin_qpairs": 0, 00:19:46.171 "io_qpairs": 0, 00:19:46.171 "current_admin_qpairs": 0, 00:19:46.171 "current_io_qpairs": 0, 00:19:46.171 "pending_bdev_io": 0, 00:19:46.171 "completed_nvme_io": 0, 00:19:46.171 "transports": [ 00:19:46.171 { 00:19:46.171 "trtype": "TCP" 00:19:46.171 } 00:19:46.171 ] 00:19:46.171 }, 00:19:46.171 { 00:19:46.171 "name": "nvmf_tgt_poll_group_002", 00:19:46.171 "admin_qpairs": 0, 00:19:46.171 "io_qpairs": 0, 00:19:46.171 "current_admin_qpairs": 0, 00:19:46.171 "current_io_qpairs": 0, 00:19:46.171 "pending_bdev_io": 0, 00:19:46.171 "completed_nvme_io": 0, 00:19:46.171 "transports": [ 00:19:46.171 { 00:19:46.171 "trtype": "TCP" 00:19:46.171 } 00:19:46.171 ] 00:19:46.171 }, 00:19:46.171 { 00:19:46.171 "name": "nvmf_tgt_poll_group_003", 00:19:46.171 "admin_qpairs": 0, 00:19:46.171 "io_qpairs": 0, 00:19:46.171 "current_admin_qpairs": 0, 00:19:46.171 "current_io_qpairs": 0, 00:19:46.171 "pending_bdev_io": 0, 00:19:46.171 "completed_nvme_io": 0, 00:19:46.171 "transports": [ 00:19:46.171 { 00:19:46.171 "trtype": "TCP" 00:19:46.171 } 00:19:46.171 ] 00:19:46.171 } 00:19:46.171 ] 00:19:46.171 }' 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=3 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 3 -lt 2 ]] 00:19:46.171 23:57:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 3415015 00:19:54.268 Initializing NVMe Controllers 00:19:54.268 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:54.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:54.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:54.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:54.268 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:54.268 Initialization complete. Launching workers. 
00:19:54.268 ========================================================
00:19:54.268 Latency(us)
00:19:54.268 Device Information : IOPS MiB/s Average min max
00:19:54.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4597.70 17.96 13972.13 2163.82 60632.09
00:19:54.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4648.40 18.16 13779.20 2676.66 63022.35
00:19:54.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4057.40 15.85 15785.81 2336.81 66195.50
00:19:54.268 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4413.60 17.24 14510.53 2188.28 61906.29
00:19:54.268 ========================================================
00:19:54.268 Total : 17717.10 69.21 14470.99 2163.82 66195.50
00:19:54.268
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:54.268 rmmod nvme_tcp
00:19:54.268 rmmod nvme_fabrics
00:19:54.268 rmmod nvme_keyring
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 3414865 ']'
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 3414865
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 3414865 ']'
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 3414865
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:54.268 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3414865
00:19:54.525 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:54.525 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:54.525 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3414865'
00:19:54.525 killing process with pid 3414865
00:19:54.525 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 3414865
00:19:54.783 23:57:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 3414865
00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:54.783
23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.783 23:57:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:58.064 00:19:58.064 real 0m44.877s 00:19:58.064 user 2m39.766s 00:19:58.064 sys 0m9.155s 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 ************************************ 00:19:58.064 END TEST nvmf_perf_adq 00:19:58.064 ************************************ 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 ************************************ 00:19:58.064 START TEST nvmf_shutdown 00:19:58.064 ************************************ 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:58.064 * Looking for test storage... 
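The perf_adq test ends just above (44.9 s wall clock per the timing summary) and the harness moves on to the shutdown suite via run_test. If this stage needs to be rerun outside the CI wrapper, the call that run_test surrounds with the START/END banners and timing reduces to the sketch below (path as used by this job; assumes a built SPDK tree and the E810 ports prepared as in the earlier setup):

  # Illustrative standalone run of the stage that starts here.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sudo ./test/nvmf/target/shutdown.sh --transport=tcp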
00:19:58.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.064 23:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:58.064 23:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:58.064 ************************************ 00:19:58.064 START TEST nvmf_shutdown_tc1 00:19:58.064 ************************************ 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:58.064 23:57:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:59.963 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:59.963 23:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:59.963 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:59.963 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:59.963 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:59.963 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:59.964 23:57:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:59.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:59.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:19:59.964 00:19:59.964 --- 10.0.0.2 ping statistics --- 00:19:59.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.964 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:59.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:59.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:19:59.964 00:19:59.964 --- 10.0.0.1 ping statistics --- 00:19:59.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:59.964 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=3418298 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 3418298 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3418298 ']' 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:59.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:59.964 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:59.964 [2024-07-24 23:57:30.436268] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:19:59.964 [2024-07-24 23:57:30.436343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:59.964 EAL: No free 2048 kB hugepages reported on node 1 00:19:59.964 [2024-07-24 23:57:30.504697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.272 [2024-07-24 23:57:30.626765] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.272 [2024-07-24 23:57:30.626821] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.272 [2024-07-24 23:57:30.626837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.272 [2024-07-24 23:57:30.626849] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.272 [2024-07-24 23:57:30.626859] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
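
The trace above has built a point-to-point NVMe/TCP topology out of the two E810 ports discovered earlier: cvl_0_0 is moved into a fresh network namespace to host the target, cvl_0_1 stays in the root namespace as the initiator, and the two cross-namespace pings confirm the 10.0.0.0/24 link before the target app is launched. A minimal bash sketch of that same sequence, using the device, namespace, and flag values seen in this log (the socket-polling loop is only a crude stand-in for the harness's waitforlisten helper):

# carve the target port into its own namespace; the initiator port stays in the root ns
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in

# start the target inside the namespace; -m 0x1E is binary 11110, i.e. reactors
# on cores 1-4, which matches the four "Reactor started" notices just below
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # stand-in for waitforlisten
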
00:20:00.272 [2024-07-24 23:57:30.627113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.272 [2024-07-24 23:57:30.630262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.272 [2024-07-24 23:57:30.630309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:00.272 [2024-07-24 23:57:30.630313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 [2024-07-24 23:57:30.793870] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.272 23:57:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.530 Malloc1 00:20:00.530 [2024-07-24 23:57:30.883338] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:00.530 Malloc2 00:20:00.530 Malloc3 00:20:00.530 Malloc4 00:20:00.530 Malloc5 00:20:00.530 Malloc6 00:20:00.789 Malloc7 00:20:00.789 Malloc8 00:20:00.789 Malloc9 00:20:00.789 Malloc10 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=3418452 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 3418452 /var/tmp/bdevperf.sock 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 3418452 ']' 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:00.789 23:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:00.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.789 { 00:20:00.789 "params": { 00:20:00.789 "name": "Nvme$subsystem", 00:20:00.789 "trtype": "$TEST_TRANSPORT", 00:20:00.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.789 "adrfam": "ipv4", 00:20:00.789 "trsvcid": "$NVMF_PORT", 00:20:00.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.789 "hdgst": ${hdgst:-false}, 00:20:00.789 "ddgst": ${ddgst:-false} 00:20:00.789 }, 00:20:00.789 "method": "bdev_nvme_attach_controller" 00:20:00.789 } 00:20:00.789 EOF 00:20:00.789 )") 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.789 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": 
"Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.790 "method": "bdev_nvme_attach_controller" 00:20:00.790 } 00:20:00.790 EOF 00:20:00.790 )") 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:00.790 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:00.790 { 00:20:00.790 "params": { 00:20:00.790 "name": "Nvme$subsystem", 00:20:00.790 "trtype": "$TEST_TRANSPORT", 00:20:00.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:00.790 "adrfam": "ipv4", 00:20:00.790 "trsvcid": "$NVMF_PORT", 00:20:00.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:00.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:00.790 "hdgst": ${hdgst:-false}, 00:20:00.790 "ddgst": ${ddgst:-false} 00:20:00.790 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 } 00:20:00.791 EOF 00:20:00.791 )") 00:20:00.791 23:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:00.791 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:20:00.791 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:00.791 23:57:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme1", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme2", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme3", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme4", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme5", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme6", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme7", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme8", 00:20:00.791 "trtype": "tcp", 
00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme9", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 },{ 00:20:00.791 "params": { 00:20:00.791 "name": "Nvme10", 00:20:00.791 "trtype": "tcp", 00:20:00.791 "traddr": "10.0.0.2", 00:20:00.791 "adrfam": "ipv4", 00:20:00.791 "trsvcid": "4420", 00:20:00.791 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:00.791 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:00.791 "hdgst": false, 00:20:00.791 "ddgst": false 00:20:00.791 }, 00:20:00.791 "method": "bdev_nvme_attach_controller" 00:20:00.791 }' 00:20:01.049 [2024-07-24 23:57:31.400738] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:01.049 [2024-07-24 23:57:31.400815] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:01.049 EAL: No free 2048 kB hugepages reported on node 1 00:20:01.049 [2024-07-24 23:57:31.463161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.049 [2024-07-24 23:57:31.573590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.946 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:02.946 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 3418452 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:20:02.947 23:57:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:20:03.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 3418452 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 3418298 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.879 { 00:20:03.879 "params": { 00:20:03.879 "name": "Nvme$subsystem", 00:20:03.879 "trtype": "$TEST_TRANSPORT", 00:20:03.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.879 "adrfam": "ipv4", 00:20:03.879 "trsvcid": "$NVMF_PORT", 00:20:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.879 "hdgst": ${hdgst:-false}, 00:20:03.879 "ddgst": ${ddgst:-false} 00:20:03.879 }, 00:20:03.879 "method": "bdev_nvme_attach_controller" 00:20:03.879 } 00:20:03.879 EOF 00:20:03.879 )") 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.879 { 00:20:03.879 "params": { 00:20:03.879 "name": "Nvme$subsystem", 00:20:03.879 "trtype": "$TEST_TRANSPORT", 00:20:03.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.879 "adrfam": "ipv4", 00:20:03.879 "trsvcid": "$NVMF_PORT", 00:20:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.879 "hdgst": ${hdgst:-false}, 00:20:03.879 "ddgst": ${ddgst:-false} 00:20:03.879 }, 00:20:03.879 "method": "bdev_nvme_attach_controller" 00:20:03.879 } 00:20:03.879 EOF 00:20:03.879 )") 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.879 { 00:20:03.879 "params": { 00:20:03.879 "name": "Nvme$subsystem", 00:20:03.879 "trtype": "$TEST_TRANSPORT", 00:20:03.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.879 "adrfam": "ipv4", 00:20:03.879 "trsvcid": "$NVMF_PORT", 00:20:03.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.879 "hdgst": ${hdgst:-false}, 00:20:03.879 "ddgst": ${ddgst:-false} 00:20:03.879 }, 00:20:03.879 "method": "bdev_nvme_attach_controller" 00:20:03.879 } 00:20:03.879 EOF 00:20:03.879 )") 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.879 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:03.880 { 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme$subsystem", 00:20:03.880 "trtype": "$TEST_TRANSPORT", 00:20:03.880 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "$NVMF_PORT", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:03.880 "hdgst": ${hdgst:-false}, 00:20:03.880 "ddgst": ${ddgst:-false} 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 } 00:20:03.880 EOF 00:20:03.880 )") 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
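
By this point the actual shutdown scenario of tc1 has already run: a throwaway initiator-side app (bdev_svc) was attached to all ten subsystems, hard-killed with SIGKILL (the "Killed" job-control message above), the target's survival was asserted with kill -0, and bdevperf is now being launched against the same target, with the expanded controller list printed just below as its config. Condensed from the shutdown.sh commands visible in this trace:

# attach an expendable app to all 10 subsystems, then kill it with no graceful detach
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # block until the app is fully up
kill -9 "$perfpid"
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 3418298          # the target pid from earlier must still be alive
# the target must now sustain a fresh verify workload; the trace shows the
# generated config arriving as /dev/fd/62
./build/examples/bdevperf --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
    -q 64 -o 65536 -w verify -t 1
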
00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:20:03.880 23:57:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme1", 00:20:03.880 "trtype": "tcp", 00:20:03.880 "traddr": "10.0.0.2", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "4420", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.880 "hdgst": false, 00:20:03.880 "ddgst": false 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 },{ 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme2", 00:20:03.880 "trtype": "tcp", 00:20:03.880 "traddr": "10.0.0.2", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "4420", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:03.880 "hdgst": false, 00:20:03.880 "ddgst": false 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 },{ 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme3", 00:20:03.880 "trtype": "tcp", 00:20:03.880 "traddr": "10.0.0.2", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "4420", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:03.880 "hdgst": false, 00:20:03.880 "ddgst": false 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 },{ 00:20:03.880 "params": { 00:20:03.880 "name": "Nvme4", 00:20:03.880 "trtype": "tcp", 00:20:03.880 "traddr": "10.0.0.2", 00:20:03.880 "adrfam": "ipv4", 00:20:03.880 "trsvcid": "4420", 00:20:03.880 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:03.880 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:03.880 "hdgst": false, 00:20:03.880 "ddgst": false 00:20:03.880 }, 00:20:03.880 "method": "bdev_nvme_attach_controller" 00:20:03.880 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme5", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme6", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme7", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme8", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme9", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 },{ 00:20:03.881 "params": { 00:20:03.881 "name": "Nvme10", 00:20:03.881 "trtype": "tcp", 00:20:03.881 "traddr": "10.0.0.2", 00:20:03.881 "adrfam": "ipv4", 00:20:03.881 "trsvcid": "4420", 00:20:03.881 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:03.881 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:03.881 "hdgst": false, 00:20:03.881 "ddgst": false 00:20:03.881 }, 00:20:03.881 "method": "bdev_nvme_attach_controller" 00:20:03.881 }' 00:20:03.881 [2024-07-24 23:57:34.426831] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:03.881 [2024-07-24 23:57:34.426920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3418779 ] 00:20:03.881 EAL: No free 2048 kB hugepages reported on node 1 00:20:04.138 [2024-07-24 23:57:34.492095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:04.138 [2024-07-24 23:57:34.603572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.507 Running I/O for 1 seconds... 00:20:06.882 00:20:06.882 Latency(us) 00:20:06.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.882 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme1n1 : 1.14 224.71 14.04 0.00 0.00 281702.78 22427.88 257872.02 00:20:06.882 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme2n1 : 1.17 218.98 13.69 0.00 0.00 284898.23 23398.78 278066.82 00:20:06.882 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme3n1 : 1.16 275.25 17.20 0.00 0.00 221285.34 18641.35 236123.78 00:20:06.882 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme4n1 : 1.13 225.56 14.10 0.00 0.00 267334.54 20777.34 254765.13 00:20:06.882 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme5n1 : 1.17 217.99 13.62 0.00 0.00 272531.91 22524.97 270299.59 00:20:06.882 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme6n1 : 1.16 220.69 13.79 0.00 0.00 264038.40 19903.53 225249.66 00:20:06.882 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme7n1 : 1.18 271.83 16.99 0.00 0.00 210511.80 9514.86 254765.13 00:20:06.882 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 
Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme8n1 : 1.15 222.76 13.92 0.00 0.00 252785.40 22622.06 260978.92 00:20:06.882 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.882 Nvme9n1 : 1.16 228.44 14.28 0.00 0.00 240877.29 8592.50 288940.94 00:20:06.882 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:06.882 Verification LBA range: start 0x0 length 0x400 00:20:06.883 Nvme10n1 : 1.18 216.58 13.54 0.00 0.00 252112.97 23204.60 292047.83 00:20:06.883 =================================================================================================================== 00:20:06.883 Total : 2322.79 145.17 0.00 0.00 252910.28 8592.50 292047.83 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:06.883 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:06.883 rmmod nvme_tcp 00:20:07.141 rmmod nvme_fabrics 00:20:07.141 rmmod nvme_keyring 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 3418298 ']' 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 3418298 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 3418298 ']' 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 3418298 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
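
The Total line just above closes the data phase: roughly 2323 IOPS (145 MiB/s) aggregated across the ten namespaces with zero failures, so the target served a full verify pass after the hard kill. What follows is the teardown: stoptarget removes the state and config files, nvmftestfini unloads the kernel NVMe/TCP modules (the rmmod lines above), then kills the target and removes the namespace plumbing (traced below). A hedged sketch of that sequence; the body of _remove_spdk_ns is not expanded in this trace, so the netns delete line is an assumption:

rm -f ./local-job0-0-verify.state
rm -rf test/nvmf/target/bdevperf.conf test/nvmf/target/rpcs.txt
modprobe -v -r nvme-tcp           # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
kill 3418298 && wait 3418298      # killprocess: stop the nvmf_tgt started at the top
ip netns delete cvl_0_0_ns_spdk   # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1          # drop the initiator-side test address
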
00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3418298 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3418298' 00:20:07.141 killing process with pid 3418298 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 3418298 00:20:07.141 23:57:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 3418298 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.708 23:57:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:09.610 00:20:09.610 real 0m11.813s 00:20:09.610 user 0m33.851s 00:20:09.610 sys 0m3.394s 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:09.610 ************************************ 00:20:09.610 END TEST nvmf_shutdown_tc1 00:20:09.610 ************************************ 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:09.610 ************************************ 00:20:09.610 START TEST nvmf_shutdown_tc2 00:20:09.610 ************************************ 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:20:09.610 23:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:09.610 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:09.870 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:09.870 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:09.870 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:09.870 23:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:09.870 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:09.870 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:09.871 23:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:09.871 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:09.871 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:20:09.871 00:20:09.871 --- 10.0.0.2 ping statistics --- 00:20:09.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.871 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:09.871 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:09.871 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:20:09.871 00:20:09.871 --- 10.0.0.1 ping statistics --- 00:20:09.871 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:09.871 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3419569 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3419569 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3419569 ']' 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:09.871 23:57:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:09.871 [2024-07-24 23:57:40.440814] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:09.871 [2024-07-24 23:57:40.440898] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:09.871 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.129 [2024-07-24 23:57:40.519795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.129 [2024-07-24 23:57:40.639277] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.129 [2024-07-24 23:57:40.639332] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.129 [2024-07-24 23:57:40.639363] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.129 [2024-07-24 23:57:40.639375] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.129 [2024-07-24 23:57:40.639394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
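The @829-@838 entries above are waitforlisten pinning down its arguments before it blocks on the new target's RPC socket; the polling loop itself runs with xtrace disabled, which is why only its (( i == 0 )) / return 0 exit is visible below. A rough bash equivalent of what happens in that gap, assuming the default /var/tmp/spdk.sock address from the trace and using scripts/rpc.py with spdk_get_version as a stand-in for the harness's internal RPC probe:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        [ -z "$pid" ] && return 1
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" || return 1       # target died mid-startup
            # any cheap RPC works as a liveness probe once the socket answers
            if scripts/rpc.py -s "$rpc_addr" -t 1 spdk_get_version &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }

The retry interval and the choice of probe RPC are illustrative; the point is that nvmfappstart only returns once the nvmf_tgt launched inside cvl_0_0_ns_spdk is accepting RPCs.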
00:20:10.129 [2024-07-24 23:57:40.639461] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.129 [2024-07-24 23:57:40.639487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.129 [2024-07-24 23:57:40.639537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:10.129 [2024-07-24 23:57:40.639540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.062 [2024-07-24 23:57:41.463986] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.062 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.062 Malloc1 00:20:11.062 [2024-07-24 23:57:41.544050] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.062 Malloc2 00:20:11.062 Malloc3 00:20:11.062 Malloc4 00:20:11.320 Malloc5 00:20:11.320 Malloc6 00:20:11.320 Malloc7 00:20:11.320 Malloc8 00:20:11.320 Malloc9 00:20:11.578 Malloc10 00:20:11.578 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.578 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:11.578 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:11.578 23:57:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=3419863 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 3419863 /var/tmp/bdevperf.sock 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3419863 ']' 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:11.578 23:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:11.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 
"name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.578 )") 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:20:11.578 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.578 { 00:20:11.578 "params": { 00:20:11.578 "name": "Nvme$subsystem", 00:20:11.578 "trtype": "$TEST_TRANSPORT", 00:20:11.578 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.578 "adrfam": "ipv4", 00:20:11.578 "trsvcid": "$NVMF_PORT", 00:20:11.578 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.578 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.578 "hdgst": ${hdgst:-false}, 00:20:11.578 "ddgst": ${ddgst:-false} 00:20:11.578 }, 00:20:11.578 "method": "bdev_nvme_attach_controller" 00:20:11.578 } 00:20:11.578 EOF 00:20:11.579 )") 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.579 { 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme$subsystem", 00:20:11.579 "trtype": "$TEST_TRANSPORT", 00:20:11.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "$NVMF_PORT", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.579 "hdgst": ${hdgst:-false}, 00:20:11.579 "ddgst": ${ddgst:-false} 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 } 00:20:11.579 EOF 00:20:11.579 )") 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.579 { 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme$subsystem", 00:20:11.579 "trtype": "$TEST_TRANSPORT", 00:20:11.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "$NVMF_PORT", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.579 "hdgst": ${hdgst:-false}, 00:20:11.579 "ddgst": ${ddgst:-false} 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 } 00:20:11.579 EOF 00:20:11.579 )") 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:11.579 { 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme$subsystem", 00:20:11.579 "trtype": "$TEST_TRANSPORT", 00:20:11.579 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "$NVMF_PORT", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.579 "hdgst": ${hdgst:-false}, 00:20:11.579 "ddgst": ${ddgst:-false} 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 } 00:20:11.579 EOF 00:20:11.579 )") 00:20:11.579 23:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:20:11.579 23:57:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme1", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme2", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme3", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme4", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme5", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme6", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme7", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme8", 00:20:11.579 "trtype": "tcp", 
00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme9", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 },{ 00:20:11.579 "params": { 00:20:11.579 "name": "Nvme10", 00:20:11.579 "trtype": "tcp", 00:20:11.579 "traddr": "10.0.0.2", 00:20:11.579 "adrfam": "ipv4", 00:20:11.579 "trsvcid": "4420", 00:20:11.579 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:11.579 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:11.579 "hdgst": false, 00:20:11.579 "ddgst": false 00:20:11.579 }, 00:20:11.579 "method": "bdev_nvme_attach_controller" 00:20:11.579 }' 00:20:11.579 [2024-07-24 23:57:42.056715] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:11.579 [2024-07-24 23:57:42.056795] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3419863 ] 00:20:11.579 EAL: No free 2048 kB hugepages reported on node 1 00:20:11.579 [2024-07-24 23:57:42.121508] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:11.837 [2024-07-24 23:57:42.231934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:13.207 Running I/O for 10 seconds... 
00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.466 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.723 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.723 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:13.723 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:13.723 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.981 23:57:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 3419863 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3419863 ']' 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3419863 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419863 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419863' 00:20:13.981 killing process with pid 3419863 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3419863 00:20:13.981 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3419863 00:20:13.981 Received shutdown signal, test time was about 0.796921 seconds 00:20:13.981 00:20:13.981 Latency(us) 00:20:13.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:13.981 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme1n1 : 0.77 250.06 15.63 0.00 0.00 252398.81 18738.44 254765.13 00:20:13.981 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme2n1 : 0.79 243.20 15.20 0.00 0.00 253421.04 38641.97 231463.44 00:20:13.981 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme3n1 : 0.78 247.21 15.45 0.00 0.00 242973.20 19515.16 246997.90 00:20:13.981 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme4n1 : 0.77 250.95 15.68 0.00 0.00 232630.80 23010.42 251658.24 00:20:13.981 Job: 
Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme5n1 : 0.78 246.28 15.39 0.00 0.00 231364.58 19320.98 237677.23 00:20:13.981 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme6n1 : 0.78 244.64 15.29 0.00 0.00 226916.12 21262.79 254765.13 00:20:13.981 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme7n1 : 0.80 241.24 15.08 0.00 0.00 224259.29 24272.59 253211.69 00:20:13.981 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme8n1 : 0.79 241.93 15.12 0.00 0.00 217386.10 19709.35 250104.79 00:20:13.981 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme9n1 : 0.75 179.37 11.21 0.00 0.00 276880.91 4587.52 254765.13 00:20:13.981 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:13.981 Verification LBA range: start 0x0 length 0x400 00:20:13.981 Nvme10n1 : 0.76 168.92 10.56 0.00 0.00 290302.10 24758.04 281173.71 00:20:13.981 =================================================================================================================== 00:20:13.981 Total : 2313.82 144.61 0.00 0.00 242202.39 4587.52 281173.71 00:20:14.238 23:57:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.639 rmmod nvme_tcp 00:20:15.639 rmmod nvme_fabrics 00:20:15.639 rmmod nvme_keyring 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.639 
23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 3419569 ']' 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 3419569 ']' 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3419569' 00:20:15.639 killing process with pid 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 3419569 00:20:15.639 23:57:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 3419569 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.898 23:57:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.427 00:20:18.427 real 0m8.297s 00:20:18.427 user 0m25.442s 00:20:18.427 sys 0m1.527s 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:18.427 ************************************ 
00:20:18.427 END TEST nvmf_shutdown_tc2 00:20:18.427 ************************************ 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:18.427 ************************************ 00:20:18.427 START TEST nvmf_shutdown_tc3 00:20:18.427 ************************************ 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:18.427 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.427 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:18.428 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:18.428 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:18.428 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:18.428 23:57:48 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:18.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:18.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:20:18.428 00:20:18.428 --- 10.0.0.2 ping statistics --- 00:20:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.428 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:18.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:18.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:20:18.428 00:20:18.428 --- 10.0.0.1 ping statistics --- 00:20:18.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:18.428 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=3420765 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 3420765 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3420765 ']' 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
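Condensed from the nvmf_tcp_init trace above: one ice port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, while its sibling (cvl_0_0) is moved into a private namespace and serves as the target at 10.0.0.2. These are the same commands the harness ran, gathered in one place:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                             # verify reachability both ways
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Everything the target does from here on (nvmf_tgt and its RPCs) is wrapped in ip netns exec cvl_0_0_ns_spdk, which is why NVMF_APP is rewritten with NVMF_TARGET_NS_CMD above.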
00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:18.428 23:57:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:18.428 [2024-07-24 23:57:48.799446] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:18.428 [2024-07-24 23:57:48.799527] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:18.428 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.428 [2024-07-24 23:57:48.871414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:18.428 [2024-07-24 23:57:48.988215] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:18.428 [2024-07-24 23:57:48.988287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:18.428 [2024-07-24 23:57:48.988304] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:18.428 [2024-07-24 23:57:48.988318] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:18.428 [2024-07-24 23:57:48.988329] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:18.429 [2024-07-24 23:57:48.988436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.429 [2024-07-24 23:57:48.988486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:18.429 [2024-07-24 23:57:48.988547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:18.429 [2024-07-24 23:57:48.988550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.361 [2024-07-24 23:57:49.771543] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.361 23:57:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:20:19.361 Malloc1 00:20:19.361 [2024-07-24 23:57:49.860759] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:19.361 Malloc2 00:20:19.361 Malloc3 00:20:19.618 Malloc4 00:20:19.619 Malloc5 00:20:19.619 Malloc6 00:20:19.619 Malloc7 00:20:19.619 Malloc8 00:20:19.877 Malloc9 00:20:19.877 Malloc10 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=3420956 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 3420956 /var/tmp/bdevperf.sock 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 3420956 ']' 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
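Both "Waiting for process to start up and listen on UNIX domain socket ..." messages (for /var/tmp/spdk.sock earlier and /var/tmp/bdevperf.sock here) come from the same waitforlisten helper. A rough sketch of that polling pattern; the retry budget and sleep interval are illustrative, and rpc_get_methods is simply a cheap RPC that answers once the server is up:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" || return 1                   # the app died before it could listen
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                                 # socket is up and answering RPCs
            fi
            sleep 0.1
        done
        return 1
    }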
00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- 
# config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.877 { 00:20:19.877 "params": { 00:20:19.877 "name": "Nvme$subsystem", 00:20:19.877 "trtype": "$TEST_TRANSPORT", 00:20:19.877 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.877 "adrfam": "ipv4", 00:20:19.877 "trsvcid": "$NVMF_PORT", 00:20:19.877 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.877 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.877 "hdgst": ${hdgst:-false}, 00:20:19.877 "ddgst": ${ddgst:-false} 00:20:19.877 }, 00:20:19.877 "method": "bdev_nvme_attach_controller" 00:20:19.877 } 00:20:19.877 EOF 00:20:19.877 )") 00:20:19.877 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.878 23:57:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.878 { 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme$subsystem", 00:20:19.878 "trtype": "$TEST_TRANSPORT", 00:20:19.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "$NVMF_PORT", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.878 "hdgst": ${hdgst:-false}, 00:20:19.878 "ddgst": ${ddgst:-false} 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 } 00:20:19.878 EOF 00:20:19.878 )") 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.878 { 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme$subsystem", 00:20:19.878 "trtype": "$TEST_TRANSPORT", 00:20:19.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "$NVMF_PORT", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.878 "hdgst": ${hdgst:-false}, 00:20:19.878 "ddgst": ${ddgst:-false} 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 } 00:20:19.878 EOF 00:20:19.878 )") 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:19.878 { 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme$subsystem", 00:20:19.878 "trtype": "$TEST_TRANSPORT", 00:20:19.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "$NVMF_PORT", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:19.878 "hdgst": ${hdgst:-false}, 00:20:19.878 "ddgst": ${ddgst:-false} 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 } 00:20:19.878 EOF 00:20:19.878 )") 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
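The ten cat <<-EOF fragments above are gen_nvmf_target_json building one bdev_nvme_attach_controller entry per subsystem; the closing jq ., IFS=, and printf '%s\n' join and validate them. A sketch of the same flow, using printf instead of heredocs so it stays copy-pasteable; the outer "subsystems" wrapper is abbreviated from what the real helper emits:

    fmt='{"params": {"name": "Nvme%d", "trtype": "tcp", "traddr": "10.0.0.2",
          "adrfam": "ipv4", "trsvcid": "4420",
          "subnqn": "nqn.2016-06.io.spdk:cnode%d",
          "hostnqn": "nqn.2016-06.io.spdk:host%d",
          "hdgst": false, "ddgst": false},
          "method": "bdev_nvme_attach_controller"}'
    config=()
    for i in {1..10}; do
        config+=("$(printf "$fmt" "$i" "$i" "$i")")      # one fragment per subsystem
    done
    # Comma-join the fragments into a bdev config and let jq parse-check it.
    (IFS=,; printf '{"subsystems": [{"subsystem": "bdev", "config": [%s]}]}\n' \
        "${config[*]}") | jq .

bdevperf then reads the result through --json /dev/fd/63, as its command line above shows.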
00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:20:19.878 23:57:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme1", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme2", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme3", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme4", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme5", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme6", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme7", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme8", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme9", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 },{ 00:20:19.878 "params": { 00:20:19.878 "name": "Nvme10", 00:20:19.878 "trtype": "tcp", 00:20:19.878 "traddr": "10.0.0.2", 00:20:19.878 "adrfam": "ipv4", 00:20:19.878 "trsvcid": "4420", 00:20:19.878 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:19.878 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:19.878 "hdgst": false, 00:20:19.878 "ddgst": false 00:20:19.878 }, 00:20:19.878 "method": "bdev_nvme_attach_controller" 00:20:19.878 }' 00:20:19.878 [2024-07-24 23:57:50.388271] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:19.878 [2024-07-24 23:57:50.388357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3420956 ] 00:20:19.878 EAL: No free 2048 kB hugepages reported on node 1 00:20:19.878 [2024-07-24 23:57:50.452826] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.136 [2024-07-24 23:57:50.563267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.083 Running I/O for 10 seconds... 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.083 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 )) 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.084 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.341 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.615 23:57:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 3420765 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 3420765 ']' 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 3420765 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.615 23:57:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3420765 00:20:22.615 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:22.615 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:22.615 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3420765' 00:20:22.615 killing process with pid 3420765 00:20:22.615 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 3420765 00:20:22.615 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 3420765 00:20:22.615 [2024-07-24 23:57:53.013574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.615 [2024-07-24 23:57:53.013654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.013674] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.615 [2024-07-24 23:57:53.013688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.013703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.615 [2024-07-24 23:57:53.013735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.013750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.615 [2024-07-24 23:57:53.013764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:20:22.615 [2024-07-24 23:57:53.013777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set 00:20:22.615 [2024-07-24 23:57:53.014310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 [2024-07-24 23:57:53.014583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.615 [2024-07-24 23:57:53.014598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.615 
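Two things happen back to back in the trace above: waitforio (target/shutdown.sh@57-69) polls bdevperf's iostat until the first bdev has completed at least 100 reads (3, then 67, then 131 in this run), and killprocess 3420765 then takes the nvmf target down while bdevperf still has I/O in flight. The storm of WRITE commands completing as ABORTED - SQ DELETION is the expected, and checked-for, outcome of tc3. A sketch of the polling loop, with illustrative paths:

    # Poll up to 10 times, 0.25 s apart, until the bdev has served >= 100 reads.
    waitforio() {
        local rpc_sock=$1 bdev=$2 ret=1 i count
        for ((i = 10; i != 0; i--)); do
            count=$(scripts/rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then
                ret=0                        # enough traffic observed; safe to kill the target
                break
            fi
            sleep 0.25
        done
        return $ret
    }

    waitforio /var/tmp/bdevperf.sock Nvme1n1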
00:20:22.615 [2024-07-24 23:57:53.014615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014731] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014747] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.615 [2024-07-24 23:57:53.014773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.615 [2024-07-24 23:57:53.014784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.615 [2024-07-24 23:57:53.014799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014925] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014937] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.014973] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.014987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.014988] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015003] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015083] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015135] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015172] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015185] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015275] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015409] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.616 [2024-07-24 23:57:53.015434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.616 [2024-07-24 23:57:53.015442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.616 [2024-07-24 23:57:53.015446] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015458] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015483] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015495] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015507] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015558] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2409420 is same with the state(5) to be set
00:20:22.617 [2024-07-24 23:57:53.015570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.015979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.015994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.617 [2024-07-24 23:57:53.016278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.617 [2024-07-24 23:57:53.016324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:22.617 [2024-07-24 23:57:53.016394] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x262e920 was disconnected and freed. reset controller.
00:20:22.617 [2024-07-24 23:57:53.016946] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2406de0 is same with the state(5) to be set
00:20:22.618 [2024-07-24 23:57:53.017748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2406de0 is same with the state(5) to be set
00:20:22.618 [2024-07-24 23:57:53.020891] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.618 [2024-07-24 23:57:53.020933] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor
00:20:22.618 [2024-07-24 23:57:53.021347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24072a0 is same with the state(5) to be set
00:20:22.619 [2024-07-24 23:57:53.022907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.619 [2024-07-24 23:57:53.022939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a9830 with addr=10.0.0.2, port=4420
00:20:22.619 [2024-07-24 23:57:53.022957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set
00:20:22.619 [2024-07-24 23:57:53.023315] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24072a0 is same with the state(5) to be set
00:20:22.619 [2024-07-24 23:57:53.023701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor
00:20:22.619 [2024-07-24 23:57:53.023755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cdb50 is same with the state(5) to be set
00:20:22.619 [2024-07-24 23:57:53.023932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.023980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.023994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.024007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.024021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.024034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.024047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set
00:20:22.619 [2024-07-24 23:57:53.024142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.619 [2024-07-24 23:57:53.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.619 [2024-07-24 23:57:53.024190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.620 [2024-07-24 23:57:53.024219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.024250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.620 [2024-07-24 23:57:53.024267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.024281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.620 [2024-07-24 23:57:53.024294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.024307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3c80 is same with the state(5) to be set
00:20:22.620 [2024-07-24 23:57:53.024393] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:22.620 [2024-07-24 23:57:53.025193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.025982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.025997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.620 [2024-07-24 23:57:53.026259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.620 [2024-07-24 23:57:53.026277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.621 [2024-07-24
23:57:53.026291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026598] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026896] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.026983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.026998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.027015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.027031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.027044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.027060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.027074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.027090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.027103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.027119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.621 [2024-07-24 23:57:53.027132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.621 [2024-07-24 23:57:53.027147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2548fe0 is same with the state(5) to be set 00:20:22.621 [2024-07-24 23:57:53.027713] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2548fe0 was disconnected and freed. reset controller. 
00:20:22.621 [2024-07-24 23:57:53.027756] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-07-24 23:57:53.027774] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.621 [2024-07-24 23:57:53.027790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.621 [2024-07-24 23:57:53.029795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-07-24 23:57:53.029825] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:22.621 [2024-07-24 23:57:53.029853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor
00:20:22.621 [2024-07-24 23:57:53.031399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-07-24 23:57:53.031432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2661cd0 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-07-24 23:57:53.031450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set
00:20:22.621 [2024-07-24 23:57:53.031877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor
00:20:22.621 [2024-07-24 23:57:53.031969] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:22.621 [2024-07-24 23:57:53.032252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:22.621 [2024-07-24 23:57:53.032275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:22.621 [2024-07-24 23:57:53.032290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:22.621 [2024-07-24 23:57:53.032544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.621 [2024-07-24 23:57:53.032576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.621 [2024-07-24 23:57:53.032949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.621 [2024-07-24 23:57:53.032985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a9830 with addr=10.0.0.2, port=4420
00:20:22.621 [2024-07-24 23:57:53.033003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set
00:20:22.621 [2024-07-24 23:57:53.033265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor
00:20:22.621 [2024-07-24 23:57:53.033513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.621 [2024-07-24 23:57:53.033535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.622 [2024-07-24 23:57:53.033550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.622 [2024-07-24 23:57:53.033797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.622 [2024-07-24 23:57:53.033827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdb50 (9): Bad file descriptor
00:20:22.622 [2024-07-24 23:57:53.033925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.622 [2024-07-24 23:57:53.033956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1 through cid:3 ...]
00:20:22.622 [2024-07-24 23:57:53.034054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc400 is same with the state(5) to be set
00:20:22.622 [2024-07-24 23:57:53.034095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3c80 (9): Bad file descriptor
00:20:22.622 [2024-07-24 23:57:53.036695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.622 [2024-07-24 23:57:53.036721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.622 [2024-07-24 23:57:53.036745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.622 [2024-07-24 23:57:53.036761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the READ / ABORTED - SQ DELETION (00/08) pair repeats for qid:1 cid:1 through cid:62, lba:24704 through lba:32512 ...]
00:20:22.623 [2024-07-24 23:57:53.038659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2550ba0 is same with the state(5) to be set
00:20:22.623 [2024-07-24 23:57:53.038737] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2550ba0 was disconnected and freed. reset controller.
00:20:22.623 [2024-07-24 23:57:53.038863] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2624d40 was disconnected and freed. reset controller.
00:20:22.623 [2024-07-24 23:57:53.040088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408100 is same with the state(5) to be set
[... the tcp.c:1653 recv-state message for tqpair=0x2408100 repeats through 23:57:53.040641 ...]
00:20:22.624 [2024-07-24 23:57:53.040738] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:22.624 [2024-07-24 23:57:53.040767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:22.624 [2024-07-24 23:57:53.040819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257e7a0 (9): Bad file descriptor
00:20:22.624 [2024-07-24 23:57:53.040845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 (9): Bad file descriptor
00:20:22.624 [2024-07-24 23:57:53.040653] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408100 is same with the state(5) to be set
[... the tcp.c:1653 recv-state message for tqpair=0x2408100 repeats through 23:57:53.041227 ...]
00:20:22.624 [2024-07-24 23:57:53.041689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:22.624 [2024-07-24 23:57:53.041836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.624 [2024-07-24 23:57:53.041863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cc400 with addr=10.0.0.2, port=4420
00:20:22.624 [2024-07-24 23:57:53.041880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc400 is same with the state(5) to be set
00:20:22.624 [2024-07-24 23:57:53.042025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.624 [2024-07-24 23:57:53.042050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257e7a0 with addr=10.0.0.2, port=4420
00:20:22.624 [2024-07-24 23:57:53.042065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257e7a0 is same with the state(5) to be set
00:20:22.624 [2024-07-24 23:57:53.042429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.624 [2024-07-24 23:57:53.042456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2661cd0 with addr=10.0.0.2, port=4420
00:20:22.624 [2024-07-24 23:57:53.042472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set
00:20:22.624 [2024-07-24 23:57:53.042491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 (9): Bad file descriptor
00:20:22.624 [2024-07-24 23:57:53.042510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257e7a0 (9): Bad file descriptor
00:20:22.624 [2024-07-24 23:57:53.042784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor
00:20:22.624 [2024-07-24 23:57:53.042809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:22.624 [2024-07-24 23:57:53.042822] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:22.624 [2024-07-24 23:57:53.042835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:22.624 [2024-07-24 23:57:53.042855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:22.624 [2024-07-24 23:57:53.042870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:22.624 [2024-07-24 23:57:53.042883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:22.624 [2024-07-24 23:57:53.043138] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.624 [2024-07-24 23:57:53.043164] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.624 [2024-07-24 23:57:53.043183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:22.624 [2024-07-24 23:57:53.043197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:22.625 [2024-07-24 23:57:53.043210] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:22.625 [2024-07-24 23:57:53.043468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.625 [2024-07-24 23:57:53.043495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.625 [2024-07-24 23:57:53.043575] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:22.625 [2024-07-24 23:57:53.043877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.625 [2024-07-24 23:57:53.043905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a9830 with addr=10.0.0.2, port=4420
00:20:22.625 [2024-07-24 23:57:53.043921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor
00:20:22.625 [2024-07-24 23:57:53.044197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.625 [2024-07-24 23:57:53.044220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for qid:0 cid:1 through cid:3 ...]
00:20:22.625 [2024-07-24 23:57:53.044326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8910 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044344] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
*ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.625 [2024-07-24 23:57:53.044377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044391] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.625 [2024-07-24 23:57:53.044422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.625 [2024-07-24 23:57:53.044449] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:22.625 [2024-07-24 23:57:53.044473] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab610 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044498] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044510] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044533] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044545] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044569] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044588] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044600] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044623] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044635] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044647] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.625 [2024-07-24 23:57:53.044851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.625 [2024-07-24 23:57:53.044850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set 00:20:22.625 [2024-07-24 23:57:53.044865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:22.625 [2024-07-24 23:57:53.044867] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044880] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044893] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044904] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044916] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.625 [2024-07-24 23:57:53.044927] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044942] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044956] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.625 [2024-07-24 23:57:53.044968] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044981] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.044994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.044999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.625 [2024-07-24 23:57:53.045006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.045018] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.625 [2024-07-24 23:57:53.045043] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.625 [2024-07-24 23:57:53.045055] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045067] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.625 [2024-07-24 23:57:53.045067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.625 [2024-07-24 23:57:53.045082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045093] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045118] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045145] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045195] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045219] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045262] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045294] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045306] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045318] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.626 [2024-07-24 23:57:53.045330] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626 [2024-07-24 23:57:53.045339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.626 [2024-07-24 23:57:53.045342] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24085c0 is same with the state(5) to be set
00:20:22.626
[2024-07-24 23:57:53.045355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 
23:57:53.045646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045936] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.626 [2024-07-24 23:57:53.045949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.626 [2024-07-24 23:57:53.045964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.045977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.045993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.627 [2024-07-24 23:57:53.046202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.627 [2024-07-24 23:57:53.046216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046386] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046417] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046434] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046497] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046526] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046538] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046626] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046640] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046665] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046677] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046722] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046750] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.627 [2024-07-24 23:57:53.046762] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.627 [2024-07-24 23:57:53.046771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.627 [2024-07-24 23:57:53.046775] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.046787] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046801] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.046812] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.046826] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.046838] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.046850] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046862] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.046874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.046886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x262fcb0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046902] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046915] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.046926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046960] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046983] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.046994] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047006] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047017] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047041] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047052] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047064] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047075] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047156] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set 00:20:22.628 [2024-07-24 23:57:53.047178] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408aa0 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.047909] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.047955] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.047972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.047984] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.047995] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048007] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048019] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048031] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048042] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048065] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048077] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048088] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048101] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048112] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.048148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.048161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.048188] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.048200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.628 [2024-07-24 23:57:53.048226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.628 [2024-07-24 23:57:53.048238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.628 [2024-07-24 23:57:53.048260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048287] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048301] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048431] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048444] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048456] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048468] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048481] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048494] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048506] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048518] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048530] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048544] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048560] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048586] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048598] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048610] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048627] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048639] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048652] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048664] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048713] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048726] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048738] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2408f60 is same with the state(5) to be set
00:20:22.629 [2024-07-24 23:57:53.048751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:22.629 [2024-07-24 23:57:53.048808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:22.629 [2024-07-24 23:57:53.048821] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.629 [2024-07-24 23:57:53.048836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.629 [2024-07-24 23:57:53.048853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.629 [2024-07-24 23:57:53.048868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.629 [2024-07-24 23:57:53.048882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.629 [2024-07-24 23:57:53.048898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.048911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.048926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.048939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.048954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.048968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.048983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.048996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049108] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049405] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.049982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.049997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.630 [2024-07-24 23:57:53.050010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.630 [2024-07-24 23:57:53.050025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.631 [2024-07-24 23:57:53.050038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.050053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.631 [2024-07-24 23:57:53.050067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.050080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2631160 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.051457] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.631 [2024-07-24 23:57:53.051481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:22.631 [2024-07-24 23:57:53.051500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:22.631 [2024-07-24 23:57:53.051882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.051910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cdb50 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.051927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cdb50 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.052039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.052064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d3c80 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.052079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3c80 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.052822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:22.631 [2024-07-24 23:57:53.052849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:22.631 [2024-07-24 23:57:53.052891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdb50 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.052914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3c80 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.053125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:22.631 [2024-07-24 23:57:53.053270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 
23:57:53.053297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257e7a0 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.053313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257e7a0 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.053419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.053449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cc400 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.053466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc400 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.053480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.053492] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.053505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:22.631 [2024-07-24 23:57:53.053524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.053538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.053551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:20:22.631 [2024-07-24 23:57:53.053638] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.631 [2024-07-24 23:57:53.053704] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.631 [2024-07-24 23:57:53.053757] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:22.631 [2024-07-24 23:57:53.053782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.631 [2024-07-24 23:57:53.053797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
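The repeated "posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111" lines in this stretch are the host-side qpairs retrying while the test has already torn down the target listeners: on Linux, errno 111 is ECONNREFUSED. A minimal standalone sketch (illustrative only, not SPDK code; it assumes nothing is listening on 127.0.0.1:4420, the NVMe/TCP port used throughout this log) reproduces the same errno:

    /* errno_111_demo.c -- illustrative only, not part of the test.
     * Attempts a TCP connect to a port where (we assume) nothing is
     * listening; on Linux the failure is ECONNREFUSED, i.e. errno 111,
     * the value reported by SPDK's posix sock layer above. */
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in sa = { .sin_family = AF_INET };

        sa.sin_port = htons(4420);                     /* NVMe/TCP port, as in the log */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr); /* assumed: no listener here */

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }

Built with a plain "cc errno_111_demo.c" on Linux, this should print "connect() failed, errno = 111 (Connection refused)", matching the failure mode the log keeps reporting.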
00:20:22.631 [2024-07-24 23:57:53.053898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.053924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2661cd0 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.053939] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.053958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257e7a0 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.053977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.054064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.054088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.054101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.054113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:22.631 [2024-07-24 23:57:53.054131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.054145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.054158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:22.631 [2024-07-24 23:57:53.054202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:22.631 [2024-07-24 23:57:53.054223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.631 [2024-07-24 23:57:53.054235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.631 [2024-07-24 23:57:53.054265] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.054281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.054299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:20:22.631 [2024-07-24 23:57:53.054347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d9280 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.054492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d8910 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.054523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab610 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.054572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:22.631 [2024-07-24 23:57:53.054672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.631 [2024-07-24 23:57:53.054686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2670570 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.054749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting 
controller failed. 00:20:22.631 [2024-07-24 23:57:53.054871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.054897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a9830 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.054913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.054958] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor 00:20:22.631 [2024-07-24 23:57:53.055003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.631 [2024-07-24 23:57:53.055024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:22.631 [2024-07-24 23:57:53.055038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:22.631 [2024-07-24 23:57:53.055083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.631 [2024-07-24 23:57:53.061651] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:22.631 [2024-07-24 23:57:53.061709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:22.631 [2024-07-24 23:57:53.061966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.062002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d3c80 with addr=10.0.0.2, port=4420 00:20:22.631 [2024-07-24 23:57:53.062021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3c80 is same with the state(5) to be set 00:20:22.631 [2024-07-24 23:57:53.062170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.631 [2024-07-24 23:57:53.062195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cdb50 with addr=10.0.0.2, port=4420 00:20:22.632 [2024-07-24 23:57:53.062212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cdb50 is same with the state(5) to be set 00:20:22.632 [2024-07-24 23:57:53.062268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3c80 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.062293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdb50 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.062341] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:20:22.632 [2024-07-24 23:57:53.062359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:20:22.632 [2024-07-24 23:57:53.062375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:20:22.632 [2024-07-24 23:57:53.062395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:22.632 [2024-07-24 23:57:53.062410] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:20:22.632 [2024-07-24 23:57:53.062423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:20:22.632 [2024-07-24 23:57:53.062467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.632 [2024-07-24 23:57:53.062485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.632 [2024-07-24 23:57:53.062986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:22.632 [2024-07-24 23:57:53.063009] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:22.632 [2024-07-24 23:57:53.063173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.632 [2024-07-24 23:57:53.063200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cc400 with addr=10.0.0.2, port=4420 00:20:22.632 [2024-07-24 23:57:53.063216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc400 is same with the state(5) to be set 00:20:22.632 [2024-07-24 23:57:53.063334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.632 [2024-07-24 23:57:53.063360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257e7a0 with addr=10.0.0.2, port=4420 00:20:22.632 [2024-07-24 23:57:53.063376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257e7a0 is same with the state(5) to be set 00:20:22.632 [2024-07-24 23:57:53.063422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.063445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257e7a0 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.063516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:22.632 [2024-07-24 23:57:53.063536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:22.632 [2024-07-24 23:57:53.063550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:22.632 [2024-07-24 23:57:53.063568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:22.632 [2024-07-24 23:57:53.063581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:22.632 [2024-07-24 23:57:53.063594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:22.632 [2024-07-24 23:57:53.063637] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:20:22.632 [2024-07-24 23:57:53.063658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.632 [2024-07-24 23:57:53.063671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
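Every completion in these dumps carries the status pair (00/08), which spdk_nvme_print_completion renders as ABORTED - SQ DELETION: the first number is the NVMe status code type (0x0, generic command status) and the second the status code (0x08, command aborted due to submission queue deletion), which is what queued reads get when the qpair's SQ is destroyed mid-reset. A small hand-rolled decoder, sketched here purely for illustration (the string table is a partial stand-in mirroring the log text, not SPDK's actual table), shows the mapping:

    /* status_decode.c -- illustrative decoder for the "(SCT/SC)" pair
     * printed with each SPDK completion; only a few generic status
     * codes are listed, with strings chosen to mirror the log above. */
    #include <stdio.h>

    static const char *decode(unsigned sct, unsigned sc)
    {
        if (sct != 0x0)
            return "(non-generic status code type)";
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";  /* Command Abort Requested */
        case 0x08: return "ABORTED - SQ DELETION"; /* Aborted due to SQ Deletion */
        default:   return "(not in this sketch's table)";
        }
    }

    int main(void)
    {
        /* The pair seen on every completion in the dumps above. */
        printf("(%02x/%02x) -> %s\n", 0x0, 0x8, decode(0x0, 0x8));
        return 0;
    }

Running it prints "(00/08) -> ABORTED - SQ DELETION", i.e. the aborts below are a direct consequence of the queue teardown, not media or transport data errors.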
00:20:22.632 [2024-07-24 23:57:53.063839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:22.632 [2024-07-24 23:57:53.063867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2661cd0 with addr=10.0.0.2, port=4420 00:20:22.632 [2024-07-24 23:57:53.063883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set 00:20:22.632 [2024-07-24 23:57:53.063929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.063976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:22.632 [2024-07-24 23:57:53.063992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:22.632 [2024-07-24 23:57:53.064005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:22.632 [2024-07-24 23:57:53.064049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:22.632 [2024-07-24 23:57:53.064258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d9280 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.064314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2670570 (9): Bad file descriptor 00:20:22.632 [2024-07-24 23:57:53.064457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.064973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.064993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.065007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.065022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.065036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.065052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.065065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.065081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.065094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.632 [2024-07-24 23:57:53.065110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.632 [2024-07-24 23:57:53.065123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.633 [2024-07-24 23:57:53.065534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.633 [2024-07-24 23:57:53.065547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:20:22.633 [2024-07-24 23:57:53.065563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pairs trimmed: READ sqid:1 cid:36-63 nsid:1 (lba:20992-24448, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:22.634 [2024-07-24 23:57:53.066399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2551ae0 is same with the state(5) to be set
00:20:22.634 [2024-07-24 23:57:53.067685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pairs trimmed: READ sqid:1 cid:0-63 nsid:1 (lba:16384-24448, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:22.635 [2024-07-24 23:57:53.069655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a5050 is same with the state(5) to be set
00:20:22.635 [2024-07-24 23:57:53.070925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:22.635 [2024-07-24 23:57:53.070953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:22.635 [2024-07-24 23:57:53.071084] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:22.635 [2024-07-24 23:57:53.071317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.635 [2024-07-24 23:57:53.071345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d8910 with addr=10.0.0.2, port=4420
00:20:22.635 [2024-07-24 23:57:53.071361] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d8910 is same with the state(5) to be set
00:20:22.635 [2024-07-24 23:57:53.071513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.635 [2024-07-24 23:57:53.071537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fab610 with addr=10.0.0.2, port=4420
00:20:22.635 [2024-07-24 23:57:53.071553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fab610 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.072225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.072258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24a9830 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.072275] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24a9830 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.072298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d8910 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.072317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fab610 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.072407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24a9830 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.072430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.072445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.072459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:22.636 [2024-07-24 23:57:53.072478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.072491] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.072505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:22.636 [2024-07-24 23:57:53.072561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:20:22.636 [2024-07-24 23:57:53.072584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:20:22.636 [2024-07-24 23:57:53.072601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.072614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.072642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.072658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.072672] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:22.636 [2024-07-24 23:57:53.072708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.072826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.072851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cdb50 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.072867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cdb50 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.072995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.073020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24d3c80 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.073035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24d3c80 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.073074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cdb50 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.073095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24d3c80 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.073156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.073174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.073188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:22.636 [2024-07-24 23:57:53.073205] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.073218] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.073231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:22.636 [2024-07-24 23:57:53.073272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:22.636 [2024-07-24 23:57:53.073294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:22.636 [2024-07-24 23:57:53.073310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.073323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.073462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.073488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x257e7a0 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.073504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257e7a0 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.073613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.073639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24cc400 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.073655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cc400 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.073693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257e7a0 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.073714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.073751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.073767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.073780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:20:22.636 [2024-07-24 23:57:53.073797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.073811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.073823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:22.636 [2024-07-24 23:57:53.073867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.073889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.073926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:22.636 [2024-07-24 23:57:53.074064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:22.636 [2024-07-24 23:57:53.074092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2661cd0 with addr=10.0.0.2, port=4420
00:20:22.636 [2024-07-24 23:57:53.074108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2661cd0 is same with the state(5) to be set
00:20:22.636 [2024-07-24 23:57:53.074146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor
00:20:22.636 [2024-07-24 23:57:53.074183] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:20:22.636 [2024-07-24 23:57:53.074198] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:20:22.636 [2024-07-24 23:57:53.074211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:20:22.636 [2024-07-24 23:57:53.074254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:22.636 [2024-07-24 23:57:53.074384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pairs trimmed: READ sqid:1 cid:0-63 nsid:1 (lba:24576-32640, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:22.638 [2024-07-24 23:57:53.076308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2552a60 is same with the state(5) to be set
00:20:22.638 [2024-07-24 23:57:53.077565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... repeated pairs trimmed: READ sqid:1 cid:0-11 nsid:1 (lba:16384-17792, len:128), each completed ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:20:22.638 [2024-07-24 23:57:53.077935] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.077949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.077964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.077979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.077995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.638 [2024-07-24 23:57:53.078174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.638 [2024-07-24 23:57:53.078188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.078972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.078988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:22.639 [2024-07-24 23:57:53.079147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.639 [2024-07-24 23:57:53.079355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.639 [2024-07-24 23:57:53.079369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.640 [2024-07-24 23:57:53.079385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.640 [2024-07-24 23:57:53.079398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.640 [2024-07-24 23:57:53.079413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.640 [2024-07-24 23:57:53.079427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:22.640 [2024-07-24 23:57:53.079443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:22.640 [2024-07-24 
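The two bursts condensed above are mechanical to triage offline. A minimal sketch, assuming the raw output was saved to a file (shutdown_tc3.log is a hypothetical name, not one the framework produces), that counts the aborted completions and recovers the LBA span per submission queue:

    #!/usr/bin/env bash
    # Triage the aborted-I/O dump: how many completions were aborted,
    # and what LBA range did the aborted READs cover per sqid?
    log=shutdown_tc3.log

    # Each aborted READ is followed by a matching completion record.
    grep -c 'ABORTED - SQ DELETION' "$log"

    # Min/max LBA of the printed READ commands, grouped by sqid.
    grep -o 'READ sqid:[0-9]* cid:[0-9]* nsid:[0-9]* lba:[0-9]*' "$log" |
      awk '{ split($2, s, ":"); q = s[2]; split($4, l, ":"); v = l[2] + 0 }
           q != "" { if (!(q in min) || v < min[q]) min[q] = v;
                     if (v > max[q]) max[q] = v }
           END { for (q in min) printf "sqid %s: lba %d..%d\n", q, min[q], max[q] }'

Note the awk fields: $2 is "sqid:N" and $4 is "lba:N" in the grep -o output ($3 being "cid:N" is skipped here, wait, $4 is "nsid:1" and lba is $5 in the full match; the pattern above extracts four tokens, so lba is field 4 of the split only if nsid is dropped; keeping all four tokens, use split($5, l, ":") instead). The point is the shape of the triage, not the exact field index.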
00:20:22.640 [2024-07-24 23:57:53.079499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2553e00 is same with the state(5) to be set
00:20:22.640 [2024-07-24 23:57:53.081097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:22.640 task offset: 24576 on job bdev=Nvme1n1 fails
00:20:22.640
00:20:22.640 Device Information (all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400)
00:20:22.640                                              : runtime(s)    IOPS   MiB/s   Fail/s   TO/s   Average(us)    min(us)    max(us)
00:20:22.640 Nvme1n1  (ended in about 0.87 s with error)  :       0.87  221.77   13.86    73.92   0.00    213876.24   16505.36  254765.13
00:20:22.640 Nvme2n1  (ended in about 0.89 s with error)  :       0.89  143.14    8.95    71.57   0.00    288616.04   22233.69  274959.93
00:20:22.640 Nvme3n1  (ended in about 0.90 s with error)  :       0.90  142.63    8.91    71.32   0.00    283624.99   22622.06  254765.13
00:20:22.640 Nvme4n1  (ended in about 0.89 s with error)  :       0.89  216.58   13.54    72.19   0.00    205380.27    7767.23  273406.48
00:20:22.640 Nvme5n1  (no error reported)                 :       0.88  217.53   13.60     0.00   0.00    266599.28   19126.80  256318.58
00:20:22.640 Nvme6n1  (ended in about 0.91 s with error)  :       0.91  140.09    8.76    70.04   0.00    270829.54   53982.25  250104.79
00:20:22.640 Nvme7n1  (ended in about 0.92 s with error)  :       0.92  139.60    8.72    69.80   0.00    265958.84   35535.08  273406.48
00:20:22.640 Nvme8n1  (ended in about 0.92 s with error)  :       0.92  207.88   12.99    69.29   0.00    196522.67   16311.18  243891.01
00:20:22.640 Nvme9n1  (ended in about 0.93 s with error)  :       0.93  138.12    8.63    69.06   0.00    257320.08   16408.27  250104.79
00:20:22.640 Nvme10n1 (ended in about 0.88 s with error)  :       0.88  146.11    9.13    73.06   0.00    234651.75   15534.46  279620.27
00:20:22.640 ===================================================================================================================
00:20:22.640 Total                                        :            1713.44  107.09   640.25   0.00    244421.77    7767.23  279620.27
00:20:22.640 [2024-07-24 23:57:53.110252] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:22.640 [2024-07-24 23:57:53.110335] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:22.640 [2024-07-24 23:57:53.110840-110901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383: sock connection error of tqpair=0x2670570 with addr=10.0.0.2, port=4420; nvme_tcp.c:327: The recv state of tqpair=0x2670570 is same with the state(5) to be set
00:20:22.640 [2024-07-24 23:57:53.111015-111058] the same connect-failure sequence repeats for tqpair=0x24d9280
00:20:22.640 [2024-07-24 23:57:53.111140, 111166] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. (twice)
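The columns in the table above are internally consistent: with 64 KiB IOs, MiB/s = IOPS / 16 (65536 / 1048576), so Nvme1n1's 221.77 IOPS gives 13.86 MiB/s, and the Total row is the per-column sum (IOPS 1713.44, Fail/s 640.25). A quick check with values copied from the table:

    # MiB/s = IOPS * IO size / 1 MiB; expect 13.86 and 107.09.
    awk 'BEGIN { printf "%.2f\n", 221.77  * 65536 / 1048576 }'
    awk 'BEGIN { printf "%.2f\n", 1713.44 * 65536 / 1048576 }'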
00:20:22.640 [2024-07-24 23:57:53.111784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:20:22.640 [2024-07-24 23:57:53.111813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:22.640 [2024-07-24 23:57:53.111888, 111917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2670570 and tqpair=0x24d9280 (9): Bad file descriptor
00:20:22.640 [2024-07-24 23:57:53.112287-112379] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: resetting controller for [cnode1], [cnode3], [cnode2], [cnode4] and [cnode5]
00:20:22.640 [2024-07-24 23:57:53.112533-112727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111, then nvme_tcp.c:2383: sock connection error, then nvme_tcp.c:327: recv-state error, for tqpair=0x1fab610 and tqpair=0x24d8910 (addr=10.0.0.2, port=4420)
00:20:22.640 [2024-07-24 23:57:53.112742-112816] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode8] and [cnode9] each log the failure triplet: 4164:nvme_ctrlr_process_init: Ctrlr is in error state; 1818:spdk_nvme_ctrlr_reconnect_poll_async: controller reinitialization failed; 1106:nvme_ctrlr_fail: in failed state
00:20:22.640 [2024-07-24 23:57:53.112867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:22.640 [2024-07-24 23:57:53.112899, 112917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (twice)
00:20:22.640 [2024-07-24 23:57:53.113040-113711] posix.c:1023: *ERROR*: connect() failed, errno = 111, then sock connection error, then recv-state error, for tqpair=0x24a9830, 0x24d3c80, 0x24cdb50, 0x24cc400 and 0x257e7a0 (all addr=10.0.0.2, port=4420)
00:20:22.640 [2024-07-24 23:57:53.113730, 113749] nvme_tcp.c:2185: *ERROR*: Failed to flush tqpair=0x1fab610 and tqpair=0x24d8910 (9): Bad file descriptor
00:20:22.641 [2024-07-24 23:57:53.113890-113933] posix.c:1023: *ERROR*: connect() failed, errno = 111, then sock connection error, then recv-state error, for tqpair=0x2661cd0 (addr=10.0.0.2, port=4420)
00:20:22.641 [2024-07-24 23:57:53.113952-113989] nvme_tcp.c:2185: *ERROR*: Failed to flush tqpair=0x24a9830, 0x24d3c80 and 0x24cdb50 (9): Bad file descriptor
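errno = 111 is ECONNREFUSED: by this point the target side has torn down its listener on 10.0.0.2:4420, so every reconnect attempt is refused at the TCP layer before any NVMe/TCP handshake can start. A host-side probe that distinguishes a dead listener from a live one, sketched here (not part of the test scripts):

    # Probe the NVMe-oF/TCP port before attempting discovery.
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      nvme discover -t tcp -a 10.0.0.2 -s 4420
    else
      echo "10.0.0.2:4420 refused the connection (errno 111 / ECONNREFUSED)" >&2
    fi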
00:20:22.641 [2024-07-24 23:57:53.114006, 114023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24cc400 and tqpair=0x257e7a0 (9): Bad file descriptor
00:20:22.641 [2024-07-24 23:57:53.114038-114107] nvme_ctrlr.c: *ERROR*: [nqn.2016-06.io.spdk:cnode7] and [cnode6] each log the failure triplet: 4164: Ctrlr is in error state; 1818: controller reinitialization failed; 1106: in failed state
00:20:22.641 [2024-07-24 23:57:53.114144, 114161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. (twice)
00:20:22.641 [2024-07-24 23:57:53.114177] nvme_tcp.c:2185: *ERROR*: Failed to flush tqpair=0x2661cd0 (9): Bad file descriptor
00:20:22.641 [2024-07-24 23:57:53.114193-114397] nvme_ctrlr.c: *ERROR*: the same triplet repeats for [cnode1], [cnode3], [cnode2], [cnode4] and [cnode5]
00:20:22.641 [2024-07-24 23:57:53.114435-114485] bdev_nvme.c:2065: *ERROR*: Resetting controller failed. (five times)
00:20:22.641 [2024-07-24 23:57:53.114496-114521] nvme_ctrlr.c: *ERROR*: the triplet repeats for [cnode10]
00:20:22.641 [2024-07-24 23:57:53.114558] bdev_nvme.c:2065: *ERROR*: Resetting controller failed.
00:20:23.206 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:23.206 23:57:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 3420956
00:20:24.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (3420956) - No such process
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:20:24.139 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
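The traced `kill -9 3420956` fails with "No such process" and the run continues via the `+ true` that follows: the recorded target pid is already gone, and shutdown.sh treats a failed kill as success. The defensive pattern, sketched:

    # Kill the target if it is still alive; do not fail the script if it is not.
    pid=3420956                      # pid taken from the trace above
    kill -9 "$pid" 2>/dev/null || true

    # Or, only signal when the process actually exists:
    if kill -0 "$pid" 2>/dev/null; then
      kill -9 "$pid"
    fi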
00:20:24.139 [shell trace; each command below carries the prefix 23:57:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 in the raw log]
nvmf/common.sh@120 -- # set +e
nvmf/common.sh@121 -- # for i in {1..20}
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
nvmf/common.sh@124 -- # set -e
nvmf/common.sh@125 -- # return 0
nvmf/common.sh@489 -- # '[' -n '' ']'
nvmf/common.sh@492 -- # '[' '' == iso ']'
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
nvmf/common.sh@496 -- # nvmf_tcp_fini
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
nvmf/common.sh@278 -- # remove_spdk_ns
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
common/autotest_common.sh@22 -- # _remove_spdk_ns
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 (at 00:20:26.667, 23:57:56)
00:20:26.667
00:20:26.667 real 0m8.207s
00:20:26.667 user 0m20.843s
00:20:26.667 sys 0m1.580s
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:26.667 ************************************
00:20:26.667 END TEST nvmf_shutdown_tc3
00:20:26.667 ************************************
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:26.667
00:20:26.667 real 0m28.528s
00:20:26.667 user 1m20.224s
00:20:26.667 sys 0m6.640s
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:26.667 ************************************
00:20:26.667 END TEST nvmf_shutdown
00:20:26.667 ************************************
00:20:26.667 23:57:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:20:26.667
00:20:26.667 real 10m32.859s
00:20:26.667 user 25m14.991s
00:20:26.667 sys 2m29.424s
00:20:26.667 23:57:56 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:20:26.667 23:57:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
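The unload order traced above matters: nvme-tcp is removed first, and common.sh wraps the removal in a retry loop (the `for i in {1..20}` at line 121) because the module cannot be unloaded while queue pairs still hold references; rmmod then reports nvme_tcp, nvme_fabrics and nvme_keyring going away, and nvme-fabrics is removed last. A minimal sketch of that retry pattern (the sleep interval is an assumption; common.sh may not pause between attempts):

    # Retry unloading nvme-tcp until its refcount drops, then remove nvme-fabrics.
    set +e                    # an early rmmod failure here is expected, not fatal
    for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 0.5               # assumed back-off; the framework's loop may differ
    done
    modprobe -v -r nvme-fabrics
    set -e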
00:20:26.667 ************************************
00:20:26.667 END TEST nvmf_target_extra
00:20:26.667 ************************************
00:20:26.667 [shell trace resumes; prefix is 23:57:56 nvmf_tcp, then 23:57:56 nvmf_tcp.nvmf_host once the sub-script starts]
nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_host
************************************
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
nvmf/common.sh@7 -- # uname -s
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
nvmf/common.sh@9-@16 -- # NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_TRANSPORT_OPTS=, NVMF_SERIAL=SPDKISFASTANDAWESOME
nvmf/common.sh@17 -- # nvme gen-hostnqn
nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
nvmf/common.sh@21 -- # NET_TYPE=phy
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
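NVME_HOSTID above is just the UUID portion of the NQN that `nvme gen-hostnqn` prints: the NQN has the shape nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is whatever follows the last colon. A sketch of that derivation (the exact parameter expansion common.sh uses may differ):

    # Generate a host NQN and pull the UUID back out of it.
    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # strip everything through the last ':'
    echo "$NVME_HOSTNQN -> $NVME_HOSTID"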
[still under the 23:57:56 nvmf_tcp.nvmf_host prefix]
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
paths/export.sh@2-@4 -- # [PATH values condensed] three PATH= assignments, each prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the already-repeated entries from earlier tests
paths/export.sh@5 -- # export PATH
paths/export.sh@6 -- # echo [the full PATH value]
nvmf/common.sh@47 -- # : 0
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
nvmf/common.sh@49 -- # build_nvmf_app_args
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
nvmf/common.sh@33 -- # '[' -n '' ']'
nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
nvmf/common.sh@51 -- # have_pci_nics=0
nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
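paths/export.sh prepends the same tool directories every time it is sourced, which is why the PATH value elided above carries multiple copies of the Go, protoc and golangci directories by this point in the run. An idempotent prepend, sketched only for contrast (the framework does not do this):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
      case ":$PATH:" in
        *":$1:"*) ;;                 # already on PATH, do nothing
        *) PATH="$1:$PATH" ;;
      esac
    }
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/golangci/1.54.2/bin
    export PATH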
common/autotest_common.sh@1105 -- # xtrace_disable
common/autotest_common.sh@10 -- # set +x
************************************
START TEST nvmf_multicontroller
************************************
[prefix becomes 23:57:56, then 23:57:57, nvmf_tcp.nvmf_host.nvmf_multicontroller]
common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
* Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
nvmf/common.sh@7-@22 -- # [repeated block condensed] the same uname/port/serial setup as above runs again under the new prefix, including a fresh nvme gen-hostnqn returning the same NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 and NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
paths/export.sh@2-@6 -- # [PATH values condensed] the prepend/export/echo sequence repeats, adding yet another copy of the Go/protoc/golangci directories to PATH
nvmf/common.sh@47 -- # : 0
nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
nvmf/common.sh@49 -- # build_nvmf_app_args
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
nvmf/common.sh@33 -- # '[' -n '' ']'
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:20:26.668 23:57:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@296 -- # local -ga e810 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.566 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:28.567 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.567 23:57:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:28.567 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:28.567 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:28.567 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:20:28.567 23:57:59 
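Device discovery here is pure sysfs: gather_supported_nvmf_pci_devs matches known Intel (E810 0x1592/0x159b, X722 0x37d2) and Mellanox device IDs against the PCI bus, then resolves each matching function to its kernel netdev by globbing the device's net/ directory; both E810 ports (0x8086:0x159b) resolve to the renamed interfaces cvl_0_0 and cvl_0_1, so is_hw=yes. A standalone sketch of that lookup, assuming only a PCI address:

# Map a PCI function to its network interface(s) via sysfs, as done above:
pci=0000:0a:00.0
for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue           # function has no bound netdev
    echo "Found net devices under $pci: ${netdir##*/}"   # e.g. cvl_0_0
done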
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:28.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:28.567 00:20:28.567 --- 10.0.0.2 ping statistics --- 00:20:28.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.567 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:20:28.567 00:20:28.567 --- 10.0.0.1 ping statistics --- 00:20:28.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.567 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=3423514 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:28.567 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 3423514 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3423514 ']' 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:28.825 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:28.825 [2024-07-24 23:57:59.220188] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
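nvmf_tcp_init carves the two physical ports into a loopback-style test topology: cvl_0_0 moves into a fresh namespace (cvl_0_0_ns_spdk) and becomes the target at 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule admits TCP traffic to the NVMe/TCP port, and the two pings above confirm reachability in both directions before nvmf_tgt is launched inside the namespace. Condensed from the commands in the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator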
00:20:28.825 [2024-07-24 23:57:59.220266] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.825 EAL: No free 2048 kB hugepages reported on node 1 00:20:28.825 [2024-07-24 23:57:59.282653] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:28.825 [2024-07-24 23:57:59.388974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.825 [2024-07-24 23:57:59.389034] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.825 [2024-07-24 23:57:59.389047] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.825 [2024-07-24 23:57:59.389058] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.825 [2024-07-24 23:57:59.389082] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.825 [2024-07-24 23:57:59.389178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.825 [2024-07-24 23:57:59.389251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.825 [2024-07-24 23:57:59.389276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 [2024-07-24 23:57:59.518355] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 Malloc0 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.083 
23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:29.083 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 [2024-07-24 23:57:59.573475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 [2024-07-24 23:57:59.581351] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 Malloc1 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=3423536 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 3423536 /var/tmp/bdevperf.sock 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 3423536 ']' 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:29.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
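With the target up, rpc_cmd (a thin wrapper over scripts/rpc.py) provisions the data path: one TCP transport, two 64 MiB malloc bdevs with 512-byte blocks, and two subsystems that each listen on both 10.0.0.2:4420 and :4421; bdevperf is then started idle (-z) on its own RPC socket so controllers can be attached before any I/O runs. A standalone equivalent of the target-side sequence, assuming the default target RPC socket and an SPDK checkout at $SPDK:

rpc="$SPDK/scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# ...repeated for cnode2 with Malloc1 and serial SPDK00000000000002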
00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:29.084 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.648 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:29.648 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:20:29.648 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:29.648 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.648 23:57:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 NVMe0n1 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.649 1 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 request: 00:20:29.649 { 00:20:29.649 "name": "NVMe0", 00:20:29.649 "trtype": "tcp", 00:20:29.649 "traddr": "10.0.0.2", 00:20:29.649 "adrfam": "ipv4", 00:20:29.649 
"trsvcid": "4420", 00:20:29.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.649 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:29.649 "hostaddr": "10.0.0.2", 00:20:29.649 "hostsvcid": "60000", 00:20:29.649 "prchk_reftag": false, 00:20:29.649 "prchk_guard": false, 00:20:29.649 "hdgst": false, 00:20:29.649 "ddgst": false, 00:20:29.649 "method": "bdev_nvme_attach_controller", 00:20:29.649 "req_id": 1 00:20:29.649 } 00:20:29.649 Got JSON-RPC error response 00:20:29.649 response: 00:20:29.649 { 00:20:29.649 "code": -114, 00:20:29.649 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:29.649 } 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 request: 00:20:29.649 { 00:20:29.649 "name": "NVMe0", 00:20:29.649 "trtype": "tcp", 00:20:29.649 "traddr": "10.0.0.2", 00:20:29.649 "adrfam": "ipv4", 00:20:29.649 "trsvcid": "4420", 00:20:29.649 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:29.649 "hostaddr": "10.0.0.2", 00:20:29.649 "hostsvcid": "60000", 00:20:29.649 "prchk_reftag": false, 00:20:29.649 "prchk_guard": false, 00:20:29.649 "hdgst": false, 00:20:29.649 "ddgst": false, 00:20:29.649 "method": "bdev_nvme_attach_controller", 00:20:29.649 "req_id": 1 00:20:29.649 } 00:20:29.649 Got JSON-RPC error response 00:20:29.649 response: 00:20:29.649 { 00:20:29.649 "code": -114, 00:20:29.649 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:20:29.649 } 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 request: 00:20:29.649 { 00:20:29.649 "name": "NVMe0", 00:20:29.649 "trtype": "tcp", 00:20:29.649 "traddr": "10.0.0.2", 00:20:29.649 "adrfam": "ipv4", 00:20:29.649 "trsvcid": "4420", 00:20:29.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.649 "hostaddr": "10.0.0.2", 00:20:29.649 "hostsvcid": "60000", 00:20:29.649 "prchk_reftag": false, 00:20:29.649 "prchk_guard": false, 00:20:29.649 "hdgst": false, 00:20:29.649 "ddgst": false, 00:20:29.649 "multipath": "disable", 00:20:29.649 "method": "bdev_nvme_attach_controller", 00:20:29.649 "req_id": 1 00:20:29.649 } 00:20:29.649 Got JSON-RPC error response 00:20:29.649 response: 00:20:29.649 { 00:20:29.649 "code": -114, 00:20:29.649 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:29.649 } 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.649 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.649 request: 00:20:29.649 { 00:20:29.650 "name": "NVMe0", 00:20:29.650 "trtype": "tcp", 00:20:29.650 "traddr": "10.0.0.2", 00:20:29.650 "adrfam": "ipv4", 00:20:29.650 "trsvcid": "4420", 00:20:29.650 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:29.650 "hostaddr": "10.0.0.2", 00:20:29.650 "hostsvcid": "60000", 00:20:29.650 "prchk_reftag": false, 00:20:29.650 "prchk_guard": false, 00:20:29.650 "hdgst": false, 00:20:29.650 "ddgst": false, 00:20:29.650 "multipath": "failover", 00:20:29.650 "method": "bdev_nvme_attach_controller", 00:20:29.650 "req_id": 1 00:20:29.650 } 00:20:29.650 Got JSON-RPC error response 00:20:29.650 response: 00:20:29.650 { 00:20:29.650 "code": -114, 00:20:29.650 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:29.650 } 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.650 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 00:20:29.906 23:58:00 
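Each rejected attach above comes back with JSON-RPC code -114 (Linux errno 114, EALREADY): reusing the bdev name NVMe0 against 10.0.0.2:4420 fails whether the hostnqn differs, the subsystem differs, multipath is set to disable, or failover is requested, because that exact network path is already attached (and in the disable case a second path is refused outright). The attach that finally succeeds points the same controller name at the second listener, 10.0.0.2:4421, giving NVMe0 an alternate path. A sketch of the contrast against the bdevperf RPC socket, assuming rpc as defined in the provisioning sketch above:

rpc="$SPDK/scripts/rpc.py"
# Re-adding the already-attached path is refused with -114 / EALREADY:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover
# ...while the second listener is accepted as an additional path for NVMe0:
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1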
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.906 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.163 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:30.163 23:58:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:31.533 0 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 3423536 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3423536 ']' 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3423536 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3423536 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 
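Because bdevperf was launched with -z it sits idle until told to run; bdevperf.py connects to the same private socket and issues perform_tests, which is when the one-second, 128-deep, 4 KiB write workload actually executes against NVMe0n1. The driving pattern, again assuming an SPDK checkout at $SPDK:

# Start the I/O engine idle on a private RPC socket (-z = wait for the perform_tests RPC;
# -q queue depth, -o I/O size in bytes, -w workload, -t run time in seconds):
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
# ...attach controllers over that socket (as above), then trigger the run:
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests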
00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3423536' 00:20:31.533 killing process with pid 3423536 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3423536 00:20:31.533 23:58:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3423536 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # sort -u 00:20:31.533 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # cat 00:20:31.533 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:31.533 [2024-07-24 23:57:59.679305] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:20:31.533 [2024-07-24 23:57:59.679405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3423536 ]
00:20:31.533 EAL: No free 2048 kB hugepages reported on node 1
00:20:31.533 [2024-07-24 23:57:59.740425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:20:31.533 [2024-07-24 23:57:59.849959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:31.533 [2024-07-24 23:58:00.575671] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 5704f2c9-1ace-43d8-93f6-408d5d9f83e0 already exists
00:20:31.533 [2024-07-24 23:58:00.575711] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:5704f2c9-1ace-43d8-93f6-408d5d9f83e0 alias for bdev NVMe1n1
00:20:31.533 [2024-07-24 23:58:00.575742] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed
00:20:31.533 Running I/O for 1 seconds...
00:20:31.533
00:20:31.533 Latency(us)
00:20:31.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.533 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:20:31.533 NVMe0n1 : 1.00 19101.47 74.62 0.00 0.00 6689.97 6213.78 17282.09
00:20:31.533 ===================================================================================================================
00:20:31.533 Total : 19101.47 74.62 0.00 0.00 6689.97 6213.78 17282.09
00:20:31.533 Received shutdown signal, test time was about 1.000000 seconds
00:20:31.533
00:20:31.533 Latency(us)
00:20:31.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:31.533 ===================================================================================================================
00:20:31.534 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:31.534 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1616 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # read -r file
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:31.534 rmmod nvme_tcp
00:20:31.534 rmmod nvme_fabrics
00:20:31.534 rmmod nvme_keyring
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 3423514 ']'
00:20:31.534 23:58:02
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 3423514 00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 3423514 ']' 00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 3423514 00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:31.534 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3423514 00:20:31.791 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:31.791 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:31.791 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3423514' 00:20:31.791 killing process with pid 3423514 00:20:31.791 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 3423514 00:20:31.791 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 3423514 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:32.049 23:58:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:33.945 00:20:33.945 real 0m7.562s 00:20:33.945 user 0m12.046s 00:20:33.945 sys 0m2.327s 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:33.945 ************************************ 00:20:33.945 END TEST nvmf_multicontroller 00:20:33.945 ************************************ 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:33.945 23:58:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:34.203 ************************************ 00:20:34.203 START TEST nvmf_aer 00:20:34.203 ************************************ 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:34.203 * Looking for test storage... 00:20:34.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.203 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:34.204 23:58:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:36.102 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:36.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:36.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:36.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:36.103 23:58:06 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:36.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:36.103 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:36.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:20:36.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:20:36.361 00:20:36.361 --- 10.0.0.2 ping statistics --- 00:20:36.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.361 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:36.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:36.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:20:36.361 00:20:36.361 --- 10.0.0.1 ping statistics --- 00:20:36.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:36.361 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=3425864 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 3425864 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 3425864 ']' 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:36.361 23:58:06 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:36.361 [2024-07-24 23:58:06.826552] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:20:36.361 [2024-07-24 23:58:06.826641] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:36.361 EAL: No free 2048 kB hugepages reported on node 1 00:20:36.361 [2024-07-24 23:58:06.895441] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:36.618 [2024-07-24 23:58:07.014352] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:36.618 [2024-07-24 23:58:07.014407] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:36.618 [2024-07-24 23:58:07.014424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:36.618 [2024-07-24 23:58:07.014438] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:36.618 [2024-07-24 23:58:07.014449] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:36.618 [2024-07-24 23:58:07.014527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:36.618 [2024-07-24 23:58:07.014598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:36.618 [2024-07-24 23:58:07.014701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:36.618 [2024-07-24 23:58:07.014702] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 [2024-07-24 23:58:07.823905] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 Malloc0 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 23:58:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 [2024-07-24 23:58:07.876331] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 [ 00:20:37.551 { 00:20:37.551 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:37.551 "subtype": "Discovery", 00:20:37.551 "listen_addresses": [], 00:20:37.551 "allow_any_host": true, 00:20:37.551 "hosts": [] 00:20:37.551 }, 00:20:37.551 { 00:20:37.551 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.551 "subtype": "NVMe", 00:20:37.551 "listen_addresses": [ 00:20:37.551 { 00:20:37.551 "trtype": "TCP", 00:20:37.551 "adrfam": "IPv4", 00:20:37.551 "traddr": "10.0.0.2", 00:20:37.551 "trsvcid": "4420" 00:20:37.551 } 00:20:37.551 ], 00:20:37.551 "allow_any_host": true, 00:20:37.551 "hosts": [], 00:20:37.551 "serial_number": "SPDK00000000000001", 00:20:37.551 "model_number": "SPDK bdev Controller", 00:20:37.551 "max_namespaces": 2, 00:20:37.551 "min_cntlid": 1, 00:20:37.551 "max_cntlid": 65519, 00:20:37.551 "namespaces": [ 00:20:37.551 { 00:20:37.551 "nsid": 1, 00:20:37.551 "bdev_name": "Malloc0", 00:20:37.551 "name": "Malloc0", 00:20:37.551 "nguid": "3835B47C7B184D44A04A32F04E98C704", 00:20:37.551 "uuid": "3835b47c-7b18-4d44-a04a-32f04e98c704" 00:20:37.551 } 00:20:37.551 ] 00:20:37.551 } 00:20:37.551 ] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3426021 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1263 -- # local i=0 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 0 -lt 200 ']' 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=1 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:37.551 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # '[' 1 -lt 200 ']' 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # i=2 00:20:37.551 23:58:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # sleep 0.1 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # return 0 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 Malloc1 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.551 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.809 [ 00:20:37.809 { 00:20:37.809 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:37.809 "subtype": "Discovery", 00:20:37.809 "listen_addresses": [], 00:20:37.809 "allow_any_host": true, 00:20:37.809 "hosts": [] 00:20:37.809 }, 00:20:37.809 { 00:20:37.809 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.809 "subtype": "NVMe", 00:20:37.809 "listen_addresses": [ 00:20:37.809 { 00:20:37.809 "trtype": "TCP", 00:20:37.809 "adrfam": "IPv4", 00:20:37.809 "traddr": "10.0.0.2", 00:20:37.809 "trsvcid": "4420" 00:20:37.809 } 00:20:37.809 ], 00:20:37.809 "allow_any_host": true, 00:20:37.809 "hosts": [], 00:20:37.809 "serial_number": "SPDK00000000000001", 00:20:37.809 "model_number": "SPDK bdev Controller", 00:20:37.809 "max_namespaces": 2, 00:20:37.809 "min_cntlid": 1, 00:20:37.809 "max_cntlid": 65519, 00:20:37.809 "namespaces": [ 00:20:37.809 { 00:20:37.809 "nsid": 1, 00:20:37.809 "bdev_name": "Malloc0", 00:20:37.809 "name": "Malloc0", 00:20:37.809 "nguid": "3835B47C7B184D44A04A32F04E98C704", 00:20:37.809 "uuid": "3835b47c-7b18-4d44-a04a-32f04e98c704" 00:20:37.809 }, 00:20:37.809 { 00:20:37.809 "nsid": 2, 00:20:37.809 "bdev_name": "Malloc1", 00:20:37.809 "name": "Malloc1", 00:20:37.809 "nguid": 
"C1C40FBFA5984454A7BD0756BF91B8E3", 00:20:37.809 "uuid": "c1c40fbf-a598-4454-a7bd-0756bf91b8e3" 00:20:37.809 } 00:20:37.809 ] 00:20:37.809 } 00:20:37.809 ] 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3426021 00:20:37.809 Asynchronous Event Request test 00:20:37.809 Attaching to 10.0.0.2 00:20:37.809 Attached to 10.0.0.2 00:20:37.809 Registering asynchronous event callbacks... 00:20:37.809 Starting namespace attribute notice tests for all controllers... 00:20:37.809 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:37.809 aer_cb - Changed Namespace 00:20:37.809 Cleaning up... 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:37.809 rmmod nvme_tcp 00:20:37.809 rmmod nvme_fabrics 00:20:37.809 rmmod nvme_keyring 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 3425864 ']' 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 3425864 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 3425864 ']' 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 3425864 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@953 -- # uname 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3425864 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3425864' 00:20:37.809 killing process with pid 3425864 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 3425864 00:20:37.809 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 3425864 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:38.067 23:58:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:40.598 00:20:40.598 real 0m6.098s 00:20:40.598 user 0m7.182s 00:20:40.598 sys 0m1.923s 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:40.598 ************************************ 00:20:40.598 END TEST nvmf_aer 00:20:40.598 ************************************ 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.598 ************************************ 00:20:40.598 START TEST nvmf_async_init 00:20:40.598 ************************************ 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:40.598 * Looking for test storage... 
00:20:40.598 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:40.598 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:40.599 23:58:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ef26d456e4384e8a9eafbd409b9c97d3 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:40.599 23:58:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:41.991 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:41.992 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:41.992 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:41.992 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:41.992 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:41.992 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.251 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.251 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:20:42.251 00:20:42.251 --- 10.0.0.2 ping statistics --- 00:20:42.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.251 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.251 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:42.251 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:20:42.251 00:20:42.251 --- 10.0.0.1 ping statistics --- 00:20:42.251 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.251 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=3427954 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 3427954 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 3427954 ']' 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.251 23:58:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.251 [2024-07-24 23:58:12.793808] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:20:42.251 [2024-07-24 23:58:12.793890] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.251 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.251 [2024-07-24 23:58:12.859185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.522 [2024-07-24 23:58:12.975444] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.522 [2024-07-24 23:58:12.975507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.522 [2024-07-24 23:58:12.975524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.522 [2024-07-24 23:58:12.975538] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.522 [2024-07-24 23:58:12.975549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.522 [2024-07-24 23:58:12.975580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.522 [2024-07-24 23:58:13.123736] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.522 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 null0 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:42.780 23:58:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ef26d456e4384e8a9eafbd409b9c97d3 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:42.780 [2024-07-24 23:58:13.163986] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.780 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.037 nvme0n1 00:20:43.037 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.037 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.037 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.037 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.037 [ 00:20:43.037 { 00:20:43.037 "name": "nvme0n1", 00:20:43.037 "aliases": [ 00:20:43.037 "ef26d456-e438-4e8a-9eaf-bd409b9c97d3" 00:20:43.037 ], 00:20:43.037 "product_name": "NVMe disk", 00:20:43.037 "block_size": 512, 00:20:43.037 "num_blocks": 2097152, 00:20:43.037 "uuid": "ef26d456-e438-4e8a-9eaf-bd409b9c97d3", 00:20:43.037 "assigned_rate_limits": { 00:20:43.037 "rw_ios_per_sec": 0, 00:20:43.037 "rw_mbytes_per_sec": 0, 00:20:43.037 "r_mbytes_per_sec": 0, 00:20:43.037 "w_mbytes_per_sec": 0 00:20:43.037 }, 00:20:43.037 "claimed": false, 00:20:43.037 "zoned": false, 00:20:43.037 "supported_io_types": { 00:20:43.037 "read": true, 00:20:43.037 "write": true, 00:20:43.037 "unmap": false, 00:20:43.037 "flush": true, 00:20:43.037 "reset": true, 00:20:43.037 "nvme_admin": true, 00:20:43.037 "nvme_io": true, 00:20:43.037 "nvme_io_md": false, 00:20:43.037 "write_zeroes": true, 00:20:43.037 "zcopy": false, 00:20:43.037 "get_zone_info": false, 00:20:43.037 "zone_management": false, 00:20:43.037 "zone_append": false, 00:20:43.037 "compare": true, 00:20:43.037 "compare_and_write": true, 00:20:43.037 "abort": true, 00:20:43.037 "seek_hole": false, 00:20:43.037 "seek_data": false, 00:20:43.037 "copy": true, 00:20:43.037 "nvme_iov_md": 
false 00:20:43.037 }, 00:20:43.037 "memory_domains": [ 00:20:43.037 { 00:20:43.037 "dma_device_id": "system", 00:20:43.037 "dma_device_type": 1 00:20:43.037 } 00:20:43.037 ], 00:20:43.037 "driver_specific": { 00:20:43.037 "nvme": [ 00:20:43.037 { 00:20:43.037 "trid": { 00:20:43.037 "trtype": "TCP", 00:20:43.037 "adrfam": "IPv4", 00:20:43.037 "traddr": "10.0.0.2", 00:20:43.037 "trsvcid": "4420", 00:20:43.037 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.037 }, 00:20:43.037 "ctrlr_data": { 00:20:43.038 "cntlid": 1, 00:20:43.038 "vendor_id": "0x8086", 00:20:43.038 "model_number": "SPDK bdev Controller", 00:20:43.038 "serial_number": "00000000000000000000", 00:20:43.038 "firmware_revision": "24.09", 00:20:43.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.038 "oacs": { 00:20:43.038 "security": 0, 00:20:43.038 "format": 0, 00:20:43.038 "firmware": 0, 00:20:43.038 "ns_manage": 0 00:20:43.038 }, 00:20:43.038 "multi_ctrlr": true, 00:20:43.038 "ana_reporting": false 00:20:43.038 }, 00:20:43.038 "vs": { 00:20:43.038 "nvme_version": "1.3" 00:20:43.038 }, 00:20:43.038 "ns_data": { 00:20:43.038 "id": 1, 00:20:43.038 "can_share": true 00:20:43.038 } 00:20:43.038 } 00:20:43.038 ], 00:20:43.038 "mp_policy": "active_passive" 00:20:43.038 } 00:20:43.038 } 00:20:43.038 ] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 [2024-07-24 23:58:13.413141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:43.038 [2024-07-24 23:58:13.413238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9cd1d0 (9): Bad file descriptor 00:20:43.038 [2024-07-24 23:58:13.545377] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
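The reset above first drops the existing qpairs, which is why a transient "Failed to flush tqpair ... Bad file descriptor" error is logged before "Resetting controller successful"; the bdev_get_bdevs dump that follows shows the same namespace reconnected under a new controller ID (cntlid 1 becomes 2 in this run). A sketch of driving the same attach/reset/inspect cycle by hand with SPDK's rpc.py, assuming the default /var/tmp/spdk.sock socket:

    # Same RPCs the test issues; flags copied from the log above.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
           -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_nvme_reset_controller nvme0   # disconnect, then rebuild the qpairs
    rpc.py bdev_get_bdevs -b nvme0n1          # ctrlr_data.cntlid reflects the new controller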
00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 [ 00:20:43.038 { 00:20:43.038 "name": "nvme0n1", 00:20:43.038 "aliases": [ 00:20:43.038 "ef26d456-e438-4e8a-9eaf-bd409b9c97d3" 00:20:43.038 ], 00:20:43.038 "product_name": "NVMe disk", 00:20:43.038 "block_size": 512, 00:20:43.038 "num_blocks": 2097152, 00:20:43.038 "uuid": "ef26d456-e438-4e8a-9eaf-bd409b9c97d3", 00:20:43.038 "assigned_rate_limits": { 00:20:43.038 "rw_ios_per_sec": 0, 00:20:43.038 "rw_mbytes_per_sec": 0, 00:20:43.038 "r_mbytes_per_sec": 0, 00:20:43.038 "w_mbytes_per_sec": 0 00:20:43.038 }, 00:20:43.038 "claimed": false, 00:20:43.038 "zoned": false, 00:20:43.038 "supported_io_types": { 00:20:43.038 "read": true, 00:20:43.038 "write": true, 00:20:43.038 "unmap": false, 00:20:43.038 "flush": true, 00:20:43.038 "reset": true, 00:20:43.038 "nvme_admin": true, 00:20:43.038 "nvme_io": true, 00:20:43.038 "nvme_io_md": false, 00:20:43.038 "write_zeroes": true, 00:20:43.038 "zcopy": false, 00:20:43.038 "get_zone_info": false, 00:20:43.038 "zone_management": false, 00:20:43.038 "zone_append": false, 00:20:43.038 "compare": true, 00:20:43.038 "compare_and_write": true, 00:20:43.038 "abort": true, 00:20:43.038 "seek_hole": false, 00:20:43.038 "seek_data": false, 00:20:43.038 "copy": true, 00:20:43.038 "nvme_iov_md": false 00:20:43.038 }, 00:20:43.038 "memory_domains": [ 00:20:43.038 { 00:20:43.038 "dma_device_id": "system", 00:20:43.038 "dma_device_type": 1 00:20:43.038 } 00:20:43.038 ], 00:20:43.038 "driver_specific": { 00:20:43.038 "nvme": [ 00:20:43.038 { 00:20:43.038 "trid": { 00:20:43.038 "trtype": "TCP", 00:20:43.038 "adrfam": "IPv4", 00:20:43.038 "traddr": "10.0.0.2", 00:20:43.038 "trsvcid": "4420", 00:20:43.038 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.038 }, 00:20:43.038 "ctrlr_data": { 00:20:43.038 "cntlid": 2, 00:20:43.038 "vendor_id": "0x8086", 00:20:43.038 "model_number": "SPDK bdev Controller", 00:20:43.038 "serial_number": "00000000000000000000", 00:20:43.038 "firmware_revision": "24.09", 00:20:43.038 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.038 "oacs": { 00:20:43.038 "security": 0, 00:20:43.038 "format": 0, 00:20:43.038 "firmware": 0, 00:20:43.038 "ns_manage": 0 00:20:43.038 }, 00:20:43.038 "multi_ctrlr": true, 00:20:43.038 "ana_reporting": false 00:20:43.038 }, 00:20:43.038 "vs": { 00:20:43.038 "nvme_version": "1.3" 00:20:43.038 }, 00:20:43.038 "ns_data": { 00:20:43.038 "id": 1, 00:20:43.038 "can_share": true 00:20:43.038 } 00:20:43.038 } 00:20:43.038 ], 00:20:43.038 "mp_policy": "active_passive" 00:20:43.038 } 00:20:43.038 } 00:20:43.038 ] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.QBYfofUPV0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.QBYfofUPV0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 [2024-07-24 23:58:13.597749] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.038 [2024-07-24 23:58:13.597917] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QBYfofUPV0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 [2024-07-24 23:58:13.605766] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.QBYfofUPV0 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.038 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.038 [2024-07-24 23:58:13.613789] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:43.038 [2024-07-24 23:58:13.613860] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:43.296 nvme0n1 00:20:43.296 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.296 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
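The TLS leg of the test runs above: any-host access is disabled, a second listener on port 4421 is created with --secure-channel, and the host NQN is allow-listed with an interchange-format PSK that both sides read from the same mode-0600 key file (the log's own warnings note the PSK-path options are deprecated for removal in v24.09, and TLS support is flagged experimental). A sketch of that provisioning, reusing the key string from the log and assuming the default RPC socket:

    # Write the PSK to a private file, then wire up target and host sides.
    KEY_PATH=$(mktemp)
    echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
           -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
           nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"
    # Host side: attach through the TLS listener with the matching identity and key.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 \
           -s 4421 -n nqn.2016-06.io.spdk:cnode0 \
           -q nqn.2016-06.io.spdk:host1 --psk "$KEY_PATH"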
00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.297 [ 00:20:43.297 { 00:20:43.297 "name": "nvme0n1", 00:20:43.297 "aliases": [ 00:20:43.297 "ef26d456-e438-4e8a-9eaf-bd409b9c97d3" 00:20:43.297 ], 00:20:43.297 "product_name": "NVMe disk", 00:20:43.297 "block_size": 512, 00:20:43.297 "num_blocks": 2097152, 00:20:43.297 "uuid": "ef26d456-e438-4e8a-9eaf-bd409b9c97d3", 00:20:43.297 "assigned_rate_limits": { 00:20:43.297 "rw_ios_per_sec": 0, 00:20:43.297 "rw_mbytes_per_sec": 0, 00:20:43.297 "r_mbytes_per_sec": 0, 00:20:43.297 "w_mbytes_per_sec": 0 00:20:43.297 }, 00:20:43.297 "claimed": false, 00:20:43.297 "zoned": false, 00:20:43.297 "supported_io_types": { 00:20:43.297 "read": true, 00:20:43.297 "write": true, 00:20:43.297 "unmap": false, 00:20:43.297 "flush": true, 00:20:43.297 "reset": true, 00:20:43.297 "nvme_admin": true, 00:20:43.297 "nvme_io": true, 00:20:43.297 "nvme_io_md": false, 00:20:43.297 "write_zeroes": true, 00:20:43.297 "zcopy": false, 00:20:43.297 "get_zone_info": false, 00:20:43.297 "zone_management": false, 00:20:43.297 "zone_append": false, 00:20:43.297 "compare": true, 00:20:43.297 "compare_and_write": true, 00:20:43.297 "abort": true, 00:20:43.297 "seek_hole": false, 00:20:43.297 "seek_data": false, 00:20:43.297 "copy": true, 00:20:43.297 "nvme_iov_md": false 00:20:43.297 }, 00:20:43.297 "memory_domains": [ 00:20:43.297 { 00:20:43.297 "dma_device_id": "system", 00:20:43.297 "dma_device_type": 1 00:20:43.297 } 00:20:43.297 ], 00:20:43.297 "driver_specific": { 00:20:43.297 "nvme": [ 00:20:43.297 { 00:20:43.297 "trid": { 00:20:43.297 "trtype": "TCP", 00:20:43.297 "adrfam": "IPv4", 00:20:43.297 "traddr": "10.0.0.2", 00:20:43.297 "trsvcid": "4421", 00:20:43.297 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:43.297 }, 00:20:43.297 "ctrlr_data": { 00:20:43.297 "cntlid": 3, 00:20:43.297 "vendor_id": "0x8086", 00:20:43.297 "model_number": "SPDK bdev Controller", 00:20:43.297 "serial_number": "00000000000000000000", 00:20:43.297 "firmware_revision": "24.09", 00:20:43.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:43.297 "oacs": { 00:20:43.297 "security": 0, 00:20:43.297 "format": 0, 00:20:43.297 "firmware": 0, 00:20:43.297 "ns_manage": 0 00:20:43.297 }, 00:20:43.297 "multi_ctrlr": true, 00:20:43.297 "ana_reporting": false 00:20:43.297 }, 00:20:43.297 "vs": { 00:20:43.297 "nvme_version": "1.3" 00:20:43.297 }, 00:20:43.297 "ns_data": { 00:20:43.297 "id": 1, 00:20:43.297 "can_share": true 00:20:43.297 } 00:20:43.297 } 00:20:43.297 ], 00:20:43.297 "mp_policy": "active_passive" 00:20:43.297 } 00:20:43.297 } 00:20:43.297 ] 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.QBYfofUPV0 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:43.297 23:58:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:43.297 rmmod nvme_tcp 00:20:43.297 rmmod nvme_fabrics 00:20:43.297 rmmod nvme_keyring 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 3427954 ']' 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 3427954 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 3427954 ']' 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 3427954 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3427954 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3427954' 00:20:43.297 killing process with pid 3427954 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 3427954 00:20:43.297 [2024-07-24 23:58:13.790749] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:43.297 [2024-07-24 23:58:13.790781] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:43.297 23:58:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 3427954 00:20:43.554 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.555 23:58:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.555 23:58:14 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:46.084 00:20:46.084 real 0m5.372s 00:20:46.084 user 0m2.056s 00:20:46.084 sys 0m1.725s 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:46.084 ************************************ 00:20:46.084 END TEST nvmf_async_init 00:20:46.084 ************************************ 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.084 ************************************ 00:20:46.084 START TEST dma 00:20:46.084 ************************************ 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:46.084 * Looking for test storage... 00:20:46.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.084 
23:58:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.084 23:58:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.085 23:58:16 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:46.085 00:20:46.085 real 0m0.068s 00:20:46.085 user 0m0.033s 00:20:46.085 sys 0m0.040s 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:46.085 ************************************ 00:20:46.085 END TEST dma 00:20:46.085 ************************************ 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:46.085 ************************************ 00:20:46.085 START TEST nvmf_identify 00:20:46.085 ************************************ 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:46.085 * Looking for test storage... 00:20:46.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:46.085 23:58:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:47.984 23:58:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:47.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.984 23:58:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:47.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:47.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:47.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:47.984 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:47.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:47.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:20:47.985 00:20:47.985 --- 10.0.0.2 ping statistics --- 00:20:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.985 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:47.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:20:47.985 00:20:47.985 --- 10.0.0.1 ping statistics --- 00:20:47.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.985 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3430082 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3430082 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 3430082 ']' 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:47.985 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:47.985 [2024-07-24 23:58:18.551305] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
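For the identify test the target is launched inside the namespace with a four-core mask (-m 0xF), which is why four reactor threads report in below before the transport is created. A sketch of the launch-and-wait pattern that nvmfappstart/waitforlisten implement, assuming the default RPC socket (the readiness loop is an illustrative equivalent, not the harness' exact code):

    # Start the target in the namespace, then poll the RPC socket until it answers.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0xF &
    until rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done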
00:20:47.985 [2024-07-24 23:58:18.551394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.985 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.242 [2024-07-24 23:58:18.616635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:48.242 [2024-07-24 23:58:18.723553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.242 [2024-07-24 23:58:18.723620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:48.242 [2024-07-24 23:58:18.723639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.242 [2024-07-24 23:58:18.723651] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.243 [2024-07-24 23:58:18.723677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.243 [2024-07-24 23:58:18.723729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.243 [2024-07-24 23:58:18.723787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.243 [2024-07-24 23:58:18.723834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:48.243 [2024-07-24 23:58:18.723836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.243 [2024-07-24 23:58:18.845363] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:48.243 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.502 Malloc0 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:48.502 [2024-07-24 23:58:18.916365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:48.502 [
00:20:48.502 {
00:20:48.502 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:48.502 "subtype": "Discovery",
00:20:48.502 "listen_addresses": [
00:20:48.502 {
00:20:48.502 "trtype": "TCP",
00:20:48.502 "adrfam": "IPv4",
00:20:48.502 "traddr": "10.0.0.2",
00:20:48.502 "trsvcid": "4420"
00:20:48.502 }
00:20:48.502 ],
00:20:48.502 "allow_any_host": true,
00:20:48.502 "hosts": []
00:20:48.502 },
00:20:48.502 {
00:20:48.502 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:48.502 "subtype": "NVMe",
00:20:48.502 "listen_addresses": [
00:20:48.502 {
00:20:48.502 "trtype": "TCP",
00:20:48.502 "adrfam": "IPv4",
00:20:48.502 "traddr": "10.0.0.2",
00:20:48.502 "trsvcid": "4420"
00:20:48.502 }
00:20:48.502 ],
00:20:48.502 "allow_any_host": true,
00:20:48.502 "hosts": [],
00:20:48.502 "serial_number": "SPDK00000000000001",
00:20:48.502 "model_number": "SPDK bdev Controller",
00:20:48.502 "max_namespaces": 32,
00:20:48.502 "min_cntlid": 1,
00:20:48.502 "max_cntlid": 65519,
00:20:48.502 "namespaces": [
00:20:48.502 {
00:20:48.502 "nsid": 1,
00:20:48.502 "bdev_name": "Malloc0",
00:20:48.502 "name": "Malloc0",
00:20:48.502 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:20:48.502 "eui64": "ABCDEF0123456789",
00:20:48.502 "uuid": "6b0a6a27-fe59-4c0a-8cd6-caaa00775328"
00:20:48.502 }
00:20:48.502 ]
00:20:48.502 }
00:20:48.502 ]
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
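The rpc_cmd lines above are the harness's thin wrapper around SPDK's JSON-RPC client; outside the harness the same target is configured with scripts/rpc.py from an SPDK checkout. A minimal sketch under that assumption (the script path is the stock SPDK tree layout; every method and flag is copied verbatim from the trace above):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems                         # returns the JSON shown above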
00:20:48.502 23:58:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:20:48.502 [2024-07-24 23:58:18.954666] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:20:48.502 [2024-07-24 23:58:18.954705] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430105 ]
00:20:48.502 EAL: No free 2048 kB hugepages reported on node 1
00:20:48.502 [2024-07-24 23:58:18.987505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:20:48.502 [2024-07-24 23:58:18.987578] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:48.502 [2024-07-24 23:58:18.987588] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:48.502 [2024-07-24 23:58:18.987602] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:48.502 [2024-07-24 23:58:18.987614] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:48.502 [2024-07-24 23:58:18.987918] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:20:48.502 [2024-07-24 23:58:18.987973] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x871540 0
00:20:48.502 [2024-07-24 23:58:19.005251] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:48.502 [2024-07-24 23:58:19.005278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:48.502 [2024-07-24 23:58:19.005287] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:48.502 [2024-07-24 23:58:19.005293] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:48.502 [2024-07-24 23:58:19.005357] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.005369] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.005377] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.502 [2024-07-24 23:58:19.005395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:48.502 [2024-07-24 23:58:19.005422] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.502 [2024-07-24 23:58:19.013257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.502 [2024-07-24 23:58:19.013274] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.502 [2024-07-24 23:58:19.013282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.502 [2024-07-24 23:58:19.013303] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:48.502 [2024-07-24 23:58:19.013335] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:20:48.502 [2024-07-24 23:58:19.013345] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
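What follows is the fabrics CONNECT and register-read handshake that any NVMe/TCP initiator performs against the discovery service configured above. Assuming a stock Linux initiator with nvme-cli installed (not part of this run), the same discovery data can be pulled with:

  nvme discover -t tcp -a 10.0.0.2 -s 4420    # should list the same two records as the Discovery Log Page further down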
00:20:48.502 [2024-07-24 23:58:19.013377] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013386] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013392] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.502 [2024-07-24 23:58:19.013404] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.502 [2024-07-24 23:58:19.013428] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.502 [2024-07-24 23:58:19.013562] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.502 [2024-07-24 23:58:19.013574] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.502 [2024-07-24 23:58:19.013581] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013588] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.502 [2024-07-24 23:58:19.013601] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:20:48.502 [2024-07-24 23:58:19.013615] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:20:48.502 [2024-07-24 23:58:19.013627] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013634] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.502 [2024-07-24 23:58:19.013651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.502 [2024-07-24 23:58:19.013672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.502 [2024-07-24 23:58:19.013778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.502 [2024-07-24 23:58:19.013791] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.502 [2024-07-24 23:58:19.013798] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.502 [2024-07-24 23:58:19.013804] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.502 [2024-07-24 23:58:19.013813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:20:48.503 [2024-07-24 23:58:19.013827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.013839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.013846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.013852] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.013862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.503 [2024-07-24 23:58:19.013883] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.013992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.014007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.014014] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014021] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.014029] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.014050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.014077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.503 [2024-07-24 23:58:19.014098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.014198] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.014210] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.014217] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.014232] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:20:48.503 [2024-07-24 23:58:19.014248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.014264] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.014374] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:20:48.503 [2024-07-24 23:58:19.014382] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.014396] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014410] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.014420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.503 [2024-07-24 23:58:19.014442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.014552] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.014567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.014574] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.014589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:48.503 [2024-07-24 23:58:19.014605] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014620] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.014631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.503 [2024-07-24 23:58:19.014651] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.014754] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.014766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.014773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014780] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.014791] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:48.503 [2024-07-24 23:58:19.014800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:20:48.503 [2024-07-24 23:58:19.014813] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:20:48.503 [2024-07-24 23:58:19.014831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:20:48.503 [2024-07-24 23:58:19.014848] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.014856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.014866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.503 [2024-07-24 23:58:19.014888] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.015020] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.503 [2024-07-24 23:58:19.015035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.503 [2024-07-24 23:58:19.015042] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015049] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871540): datao=0, datal=4096, cccid=0
00:20:48.503 [2024-07-24 23:58:19.015056] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d13c0) on tqpair(0x871540): expected_datao=0, payload_size=4096
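At this point CC.EN has been written and CSTS.RDY has come back as 1, so the example moves on to Identify Controller (the IDENTIFY cdw10:00000001 command above). Which controller it probes is determined entirely by the -r transport ID string parsed at startup; the same binary can be pointed at the NVM subsystem instead of the discovery service. A hedged re-run sketch, identical to the invocation above except for the subnqn:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all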
00:20:48.503 [2024-07-24 23:58:19.015064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015081] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015092] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015145] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.015156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.015163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015169] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.015180] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:20:48.503 [2024-07-24 23:58:19.015189] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:20:48.503 [2024-07-24 23:58:19.015197] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:20:48.503 [2024-07-24 23:58:19.015205] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:20:48.503 [2024-07-24 23:58:19.015213] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:20:48.503 [2024-07-24 23:58:19.015221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:20:48.503 [2024-07-24 23:58:19.015235] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:20:48.503 [2024-07-24 23:58:19.015260] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015276] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.015287] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:48.503 [2024-07-24 23:58:19.015312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.503 [2024-07-24 23:58:19.015424] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.503 [2024-07-24 23:58:19.015436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.503 [2024-07-24 23:58:19.015443] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015450] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
00:20:48.503 [2024-07-24 23:58:19.015461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.015485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.503 [2024-07-24 23:58:19.015495] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.015517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.503 [2024-07-24 23:58:19.015527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x871540)
00:20:48.503 [2024-07-24 23:58:19.015549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.503 [2024-07-24 23:58:19.015559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015566] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.503 [2024-07-24 23:58:19.015572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.015580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.504 [2024-07-24 23:58:19.015589] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:20:48.504 [2024-07-24 23:58:19.015608] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:48.504 [2024-07-24 23:58:19.015621] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.015628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.015639] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.504 [2024-07-24 23:58:19.015661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d13c0, cid 0, qid 0
00:20:48.504 [2024-07-24 23:58:19.015672] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1540, cid 1, qid 0
00:20:48.504 [2024-07-24 23:58:19.015679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d16c0, cid 2, qid 0
00:20:48.504 [2024-07-24 23:58:19.015687] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.504 [2024-07-24 23:58:19.015694] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d19c0, cid 4, qid 0
00:20:48.504 [2024-07-24 23:58:19.015831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.504 [2024-07-24 23:58:19.015846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.504 [2024-07-24 23:58:19.015853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.015860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d19c0) on tqpair=0x871540
00:20:48.504 [2024-07-24 23:58:19.015873] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:20:48.504 [2024-07-24 23:58:19.015882] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:20:48.504 [2024-07-24 23:58:19.015900] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.015909] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.015920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.504 [2024-07-24 23:58:19.015940] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d19c0, cid 4, qid 0
00:20:48.504 [2024-07-24 23:58:19.016058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.504 [2024-07-24 23:58:19.016070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.504 [2024-07-24 23:58:19.016077] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.016084] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871540): datao=0, datal=4096, cccid=4
00:20:48.504 [2024-07-24 23:58:19.016091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d19c0) on tqpair(0x871540): expected_datao=0, payload_size=4096
00:20:48.504 [2024-07-24 23:58:19.016098] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.016114] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.016124] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.504 [2024-07-24 23:58:19.056354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.504 [2024-07-24 23:58:19.056362] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d19c0) on tqpair=0x871540
00:20:48.504 [2024-07-24 23:58:19.056388] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:20:48.504 [2024-07-24 23:58:19.056426] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056437] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.056449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.504 [2024-07-24 23:58:19.056461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.056483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:48.504 [2024-07-24 23:58:19.056511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d19c0, cid 4, qid 0
00:20:48.504 [2024-07-24 23:58:19.056523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1b40, cid 5, qid 0
00:20:48.504 [2024-07-24 23:58:19.056667] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.504 [2024-07-24 23:58:19.056679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.504 [2024-07-24 23:58:19.056686] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056692] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871540): datao=0, datal=1024, cccid=4
00:20:48.504 [2024-07-24 23:58:19.056700] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d19c0) on tqpair(0x871540): expected_datao=0, payload_size=1024
00:20:48.504 [2024-07-24 23:58:19.056707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056721] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056730] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.504 [2024-07-24 23:58:19.056748] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.504 [2024-07-24 23:58:19.056754] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.056761] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1b40) on tqpair=0x871540
00:20:48.504 [2024-07-24 23:58:19.097344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.504 [2024-07-24 23:58:19.097364] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.504 [2024-07-24 23:58:19.097372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097379] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d19c0) on tqpair=0x871540
00:20:48.504 [2024-07-24 23:58:19.097396] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.097418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.504 [2024-07-24 23:58:19.097448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d19c0, cid 4, qid 0
00:20:48.504 [2024-07-24 23:58:19.097571] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.504 [2024-07-24 23:58:19.097586] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.504 [2024-07-24 23:58:19.097593] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097600] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871540): datao=0, datal=3072, cccid=4
00:20:48.504 [2024-07-24 23:58:19.097607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d19c0) on tqpair(0x871540): expected_datao=0, payload_size=3072
00:20:48.504 [2024-07-24 23:58:19.097615] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097641] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097651] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.504 [2024-07-24 23:58:19.097726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.504 [2024-07-24 23:58:19.097733] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d19c0) on tqpair=0x871540
00:20:48.504 [2024-07-24 23:58:19.097754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x871540)
00:20:48.504 [2024-07-24 23:58:19.097773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.504 [2024-07-24 23:58:19.097801] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d19c0, cid 4, qid 0
00:20:48.504 [2024-07-24 23:58:19.097946] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:48.504 [2024-07-24 23:58:19.097958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:48.504 [2024-07-24 23:58:19.097965] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097972] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x871540): datao=0, datal=8, cccid=4
00:20:48.504 [2024-07-24 23:58:19.097979] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8d19c0) on tqpair(0x871540): expected_datao=0, payload_size=8
00:20:48.504 [2024-07-24 23:58:19.097987] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.097996] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:48.504 [2024-07-24 23:58:19.098008] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:48.766 [2024-07-24 23:58:19.141262] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.766 [2024-07-24 23:58:19.141285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.766 [2024-07-24 23:58:19.141293] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.766 [2024-07-24 23:58:19.141300] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d19c0) on tqpair=0x871540
00:20:48.766 =====================================================
00:20:48.766 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:48.766 =====================================================
00:20:48.766 Controller Capabilities/Features
00:20:48.766 ================================
00:20:48.766 Vendor ID: 0000
00:20:48.766 Subsystem Vendor ID: 0000
00:20:48.766 Serial Number: ....................
00:20:48.766 Model Number: ........................................
00:20:48.766 Firmware Version: 24.09
00:20:48.766 Recommended Arb Burst: 0
00:20:48.766 IEEE OUI Identifier: 00 00 00
00:20:48.766 Multi-path I/O
00:20:48.766 May have multiple subsystem ports: No
00:20:48.766 May have multiple controllers: No
00:20:48.766 Associated with SR-IOV VF: No
00:20:48.766 Max Data Transfer Size: 131072
00:20:48.766 Max Number of Namespaces: 0
00:20:48.766 Max Number of I/O Queues: 1024
00:20:48.766 NVMe Specification Version (VS): 1.3
00:20:48.766 NVMe Specification Version (Identify): 1.3
00:20:48.766 Maximum Queue Entries: 128
00:20:48.766 Contiguous Queues Required: Yes
00:20:48.766 Arbitration Mechanisms Supported
00:20:48.766 Weighted Round Robin: Not Supported
00:20:48.766 Vendor Specific: Not Supported
00:20:48.766 Reset Timeout: 15000 ms
00:20:48.766 Doorbell Stride: 4 bytes
00:20:48.766 NVM Subsystem Reset: Not Supported
00:20:48.766 Command Sets Supported
00:20:48.766 NVM Command Set: Supported
00:20:48.766 Boot Partition: Not Supported
00:20:48.766 Memory Page Size Minimum: 4096 bytes
00:20:48.766 Memory Page Size Maximum: 4096 bytes
00:20:48.766 Persistent Memory Region: Not Supported
00:20:48.766 Optional Asynchronous Events Supported
00:20:48.766 Namespace Attribute Notices: Not Supported
00:20:48.766 Firmware Activation Notices: Not Supported
00:20:48.766 ANA Change Notices: Not Supported
00:20:48.766 PLE Aggregate Log Change Notices: Not Supported
00:20:48.766 LBA Status Info Alert Notices: Not Supported
00:20:48.766 EGE Aggregate Log Change Notices: Not Supported
00:20:48.766 Normal NVM Subsystem Shutdown event: Not Supported
00:20:48.766 Zone Descriptor Change Notices: Not Supported
00:20:48.766 Discovery Log Change Notices: Supported
00:20:48.766 Controller Attributes
00:20:48.766 128-bit Host Identifier: Not Supported
00:20:48.766 Non-Operational Permissive Mode: Not Supported
00:20:48.766 NVM Sets: Not Supported
00:20:48.766 Read Recovery Levels: Not Supported
00:20:48.766 Endurance Groups: Not Supported
00:20:48.766 Predictable Latency Mode: Not Supported
00:20:48.766 Traffic Based Keep ALive: Not Supported
00:20:48.766 Namespace Granularity: Not Supported
00:20:48.766 SQ Associations: Not Supported
00:20:48.766 UUID List: Not Supported
00:20:48.766 Multi-Domain Subsystem: Not Supported
00:20:48.766 Fixed Capacity Management: Not Supported
00:20:48.766 Variable Capacity Management: Not Supported
00:20:48.766 Delete Endurance Group: Not Supported
00:20:48.766 Delete NVM Set: Not Supported
00:20:48.766 Extended LBA Formats Supported: Not Supported
00:20:48.766 Flexible Data Placement Supported: Not Supported
00:20:48.766
00:20:48.766 Controller Memory Buffer Support
00:20:48.766 ================================
00:20:48.766 Supported: No
00:20:48.766
00:20:48.766 Persistent Memory Region Support
00:20:48.766 ================================
00:20:48.766 Supported: No
00:20:48.766
00:20:48.766 Admin Command Set Attributes
00:20:48.766 ============================
00:20:48.766 Security Send/Receive: Not Supported
00:20:48.766 Format NVM: Not Supported
00:20:48.766 Firmware Activate/Download: Not Supported
00:20:48.766 Namespace Management: Not Supported
00:20:48.766 Device Self-Test: Not Supported
00:20:48.766 Directives: Not Supported
00:20:48.766 NVMe-MI: Not Supported
00:20:48.766 Virtualization Management: Not Supported
00:20:48.766 Doorbell Buffer Config: Not Supported
00:20:48.766 Get LBA Status Capability: Not Supported
00:20:48.766 Command & Feature Lockdown Capability: Not Supported
00:20:48.766 Abort Command Limit: 1
00:20:48.766 Async Event Request Limit: 4
00:20:48.766 Number of Firmware Slots: N/A
00:20:48.766 Firmware Slot 1 Read-Only: N/A
00:20:48.766 Firmware Activation Without Reset: N/A
00:20:48.766 Multiple Update Detection Support: N/A
00:20:48.766 Firmware Update Granularity: No Information Provided
00:20:48.766 Per-Namespace SMART Log: No
00:20:48.766 Asymmetric Namespace Access Log Page: Not Supported
00:20:48.766 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:48.766 Command Effects Log Page: Not Supported
00:20:48.766 Get Log Page Extended Data: Supported
00:20:48.766 Telemetry Log Pages: Not Supported
00:20:48.766 Persistent Event Log Pages: Not Supported
00:20:48.766 Supported Log Pages Log Page: May Support
00:20:48.766 Commands Supported & Effects Log Page: Not Supported
00:20:48.766 Feature Identifiers & Effects Log Page:May Support
00:20:48.766 NVMe-MI Commands & Effects Log Page: May Support
00:20:48.766 Data Area 4 for Telemetry Log: Not Supported
00:20:48.766 Error Log Page Entries Supported: 128
00:20:48.766 Keep Alive: Not Supported
00:20:48.766
00:20:48.766 NVM Command Set Attributes
00:20:48.766 ==========================
00:20:48.766 Submission Queue Entry Size
00:20:48.766 Max: 1
00:20:48.766 Min: 1
00:20:48.766 Completion Queue Entry Size
00:20:48.766 Max: 1
00:20:48.766 Min: 1
00:20:48.766 Number of Namespaces: 0
00:20:48.766 Compare Command: Not Supported
00:20:48.766 Write Uncorrectable Command: Not Supported
00:20:48.766 Dataset Management Command: Not Supported
00:20:48.766 Write Zeroes Command: Not Supported
00:20:48.766 Set Features Save Field: Not Supported
00:20:48.766 Reservations: Not Supported
00:20:48.766 Timestamp: Not Supported
00:20:48.766 Copy: Not Supported
00:20:48.766 Volatile Write Cache: Not Present
00:20:48.766 Atomic Write Unit (Normal): 1
00:20:48.766 Atomic Write Unit (PFail): 1
00:20:48.766 Atomic Compare & Write Unit: 1
00:20:48.766 Fused Compare & Write: Supported
00:20:48.766 Scatter-Gather List
00:20:48.766 SGL Command Set: Supported
00:20:48.766 SGL Keyed: Supported
00:20:48.766 SGL Bit Bucket Descriptor: Not Supported
00:20:48.766 SGL Metadata Pointer: Not Supported
00:20:48.766 Oversized SGL: Not Supported
00:20:48.766 SGL Metadata Address: Not Supported
00:20:48.766 SGL Offset: Supported
00:20:48.766 Transport SGL Data Block: Not Supported
00:20:48.766 Replay Protected Memory Block: Not Supported
00:20:48.766
00:20:48.766 Firmware Slot Information
00:20:48.766 =========================
00:20:48.766 Active slot: 0
00:20:48.766
00:20:48.766
00:20:48.766 Error Log
00:20:48.766 =========
00:20:48.766
00:20:48.766 Active Namespaces
00:20:48.766 =================
00:20:48.766 Discovery Log Page
00:20:48.766 ==================
00:20:48.766 Generation Counter: 2
00:20:48.766 Number of Records: 2
00:20:48.766 Record Format: 0
00:20:48.766
00:20:48.766 Discovery Log Entry 0
00:20:48.766 ----------------------
00:20:48.766 Transport Type: 3 (TCP)
00:20:48.766 Address Family: 1 (IPv4)
00:20:48.766 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:48.766 Entry Flags:
00:20:48.766 Duplicate Returned Information: 1
00:20:48.766 Explicit Persistent Connection Support for Discovery: 1
00:20:48.766 Transport Requirements:
00:20:48.766 Secure Channel: Not Required
00:20:48.766 Port ID: 0 (0x0000)
00:20:48.766 Controller ID: 65535 (0xffff)
00:20:48.766 Admin Max SQ Size: 128
00:20:48.766 Transport Service Identifier: 4420
00:20:48.766 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:48.766 Transport Address: 10.0.0.2
00:20:48.766 Discovery Log Entry 1
00:20:48.766 ----------------------
00:20:48.766 Transport Type: 3 (TCP)
00:20:48.766 Address Family: 1 (IPv4)
00:20:48.766 Subsystem Type: 2 (NVM Subsystem)
00:20:48.766 Entry Flags:
00:20:48.766 Duplicate Returned Information: 0
00:20:48.766 Explicit Persistent Connection Support for Discovery: 0
00:20:48.767 Transport Requirements:
00:20:48.767 Secure Channel: Not Required
00:20:48.767 Port ID: 0 (0x0000)
00:20:48.767 Controller ID: 65535 (0xffff)
00:20:48.767 Admin Max SQ Size: 128
00:20:48.767 Transport Service Identifier: 4420
00:20:48.767 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:48.767 Transport Address: 10.0.0.2
[2024-07-24 23:58:19.141415] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
[2024-07-24 23:58:19.141436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d13c0) on tqpair=0x871540
[2024-07-24 23:58:19.141448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:58:19.141457] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1540) on tqpair=0x871540
[2024-07-24 23:58:19.141464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:58:19.141472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d16c0) on tqpair=0x871540
[2024-07-24 23:58:19.141480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:58:19.141488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
[2024-07-24 23:58:19.141495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-24 23:58:19.141513] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-07-24 23:58:19.141522] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-07-24 23:58:19.141529] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
[2024-07-24 23:58:19.141540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-24 23:58:19.141579] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
[2024-07-24 23:58:19.141714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
[2024-07-24 23:58:19.141727] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
[2024-07-24 23:58:19.141734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
[2024-07-24 23:58:19.141741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
[2024-07-24 23:58:19.141752] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
[2024-07-24 23:58:19.141760] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
[2024-07-24 23:58:19.141767] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
[2024-07-24 23:58:19.141777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
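Discovery Log Entry 1 above is the machine-readable advertisement of nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420. On a Linux initiator that entry maps one-to-one onto an nvme-cli attach; a sketch with standard nvme-cli flags (not exercised by this test run):

  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list                                       # the Malloc0 namespace shows up under an 'SPDK bdev Controller'
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1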
00:20:48.767 [2024-07-24 23:58:19.141804] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.141923] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.141939] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.141946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.141952] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.141961] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:20:48.767 [2024-07-24 23:58:19.141969] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:20:48.767 [2024-07-24 23:58:19.141989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.141999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.142016] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.142037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.142157] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.142169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.142176] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142183] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.142199] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142215] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.142226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.142255] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.142363] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.142377] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.142384] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142391] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.142407] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142422] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.142433] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.142453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.142557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.142572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.142579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142585] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.142602] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142611] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.142628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.142649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.142755] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.142770] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.142777] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142784] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.142800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142809] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142820] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.142831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.142852] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.142970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.142983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.142990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.142996] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.143012] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143022] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143028] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.143038] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.143059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.143161] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.143176] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.143183] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143189] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.143205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143221] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.143232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.143261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.767 [2024-07-24 23:58:19.143365] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.767 [2024-07-24 23:58:19.143381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.767 [2024-07-24 23:58:19.143387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143394] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.767 [2024-07-24 23:58:19.143410] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.767 [2024-07-24 23:58:19.143426] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.767 [2024-07-24 23:58:19.143437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.767 [2024-07-24 23:58:19.143458] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.143568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.143583] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.143590] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.143613] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143622] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143628] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.143642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.143664] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.143765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.143777] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.143784] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143791] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.143807] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143816] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143823] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.143833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.143853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.143951] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.143963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.143970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.143977] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.143992] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144001] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.144038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.144135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.144147] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.144154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.144176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144192] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.144223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.144333] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.144348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.144355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.144378] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144388] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144394] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144405] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.144430] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.144529] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.144541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.144548] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144555] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.144571] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.144618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.144717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.144732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.144738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.144761] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144770] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.144808] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.144916] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.144931] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.144938] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.144961] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144970] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.144977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.144987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.145008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.145113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.145125] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.145132] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.145138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.145154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.145163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.145170] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.145180] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.145205] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.149263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.149281] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.149288] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.149295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.149312] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.149322] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.149328] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x871540)
00:20:48.768 [2024-07-24 23:58:19.149339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:48.768 [2024-07-24 23:58:19.149361] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8d1840, cid 3, qid 0
00:20:48.768 [2024-07-24 23:58:19.149485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:48.768 [2024-07-24 23:58:19.149500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:48.768 [2024-07-24 23:58:19.149507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:48.768 [2024-07-24 23:58:19.149514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8d1840) on tqpair=0x871540
00:20:48.768 [2024-07-24 23:58:19.149527] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds
00:20:48.768
00:20:48.768 23:58:19
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:48.768 [2024-07-24 23:58:19.185979] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:48.768 [2024-07-24 23:58:19.186023] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3430216 ] 00:20:48.768 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.768 [2024-07-24 23:58:19.220001] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:48.768 [2024-07-24 23:58:19.220050] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:48.768 [2024-07-24 23:58:19.220060] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:48.769 [2024-07-24 23:58:19.220075] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:48.769 [2024-07-24 23:58:19.220087] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:48.769 [2024-07-24 23:58:19.220357] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:48.769 [2024-07-24 23:58:19.220396] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1d94540 0 00:20:48.769 [2024-07-24 23:58:19.235266] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:48.769 [2024-07-24 23:58:19.235290] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:48.769 [2024-07-24 23:58:19.235300] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:48.769 [2024-07-24 23:58:19.235306] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:48.769 [2024-07-24 23:58:19.235345] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.235357] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.235370] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.235385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:48.769 [2024-07-24 23:58:19.235411] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.243257] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.243275] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.243282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.243303] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:48.769 [2024-07-24 23:58:19.243316] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:48.769 [2024-07-24 
23:58:19.243326] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:48.769 [2024-07-24 23:58:19.243344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243353] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243359] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.243371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.243394] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.243556] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.243572] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.243579] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.243598] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:48.769 [2024-07-24 23:58:19.243614] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:48.769 [2024-07-24 23:58:19.243629] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243644] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.243655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.243677] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.243786] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.243804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.243812] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.243827] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:48.769 [2024-07-24 23:58:19.243841] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.243856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243865] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.243871] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.243886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.243910] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.244028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.244043] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.244050] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244057] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.244065] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.244084] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244095] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244101] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.244112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.244134] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.244239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.244264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.244272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244279] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.244286] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:48.769 [2024-07-24 23:58:19.244294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.244309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.244421] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:48.769 [2024-07-24 23:58:19.244428] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.244441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244456] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.244481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.244503] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.244673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
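The FABRIC PROPERTY SET/GET records around this point are the fabrics form of the standard NVMe enable handshake: the host clears CC.EN and waits for CSTS.RDY to drop to 0, then sets CC.EN = 1 and polls until CSTS.RDY = 1. A minimal sketch of that handshake follows; read_cc/read_csts/write_cc are hypothetical stand-ins for the Property Get/Set capsules in the log (CC at offset 0x14, CSTS at 0x1c per the NVMe spec), backed here by a simulated register so the example runs standalone.

#include <stdint.h>
#include <stdio.h>

#define NVME_CC_EN    (1u << 0)  /* CC.EN  - controller enable */
#define NVME_CSTS_RDY (1u << 0)  /* CSTS.RDY - controller ready */

/* Hypothetical property accessors; a real NVMe-oF host issues Property
 * Get/Set capsules here. The simulation makes RDY track EN, as a
 * well-behaved controller would. */
static uint32_t cc_reg;
static uint32_t read_cc(void)        { return cc_reg; }
static uint32_t read_csts(void)      { return (cc_reg & NVME_CC_EN) ? NVME_CSTS_RDY : 0; }
static void     write_cc(uint32_t v) { cc_reg = v; }

static void enable_controller(void)
{
    /* "disable and wait for CSTS.RDY = 0" */
    if (read_cc() & NVME_CC_EN) {
        write_cc(read_cc() & ~NVME_CC_EN);
        while (read_csts() & NVME_CSTS_RDY) { /* each poll = one PROPERTY GET */ }
    }
    /* "Setting CC.EN = 1", then "wait for CSTS.RDY = 1" */
    write_cc(read_cc() | NVME_CC_EN);
    while (!(read_csts() & NVME_CSTS_RDY)) { /* bounded by CAP.TO in practice */ }
    /* "CC.EN = 1 && CSTS.RDY = 1 - controller is ready" */
}

int main(void)
{
    enable_controller();
    printf("controller is ready\n");
    return 0;
}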
00:20:48.769 [2024-07-24 23:58:19.244691] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.244699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.244714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:48.769 [2024-07-24 23:58:19.244731] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244743] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244750] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.244765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.244788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.244905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.769 [2024-07-24 23:58:19.244922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.769 [2024-07-24 23:58:19.244929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.244935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.769 [2024-07-24 23:58:19.244943] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:48.769 [2024-07-24 23:58:19.244951] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:48.769 [2024-07-24 23:58:19.244966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:48.769 [2024-07-24 23:58:19.244982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:48.769 [2024-07-24 23:58:19.244997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.245005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.769 [2024-07-24 23:58:19.245016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.769 [2024-07-24 23:58:19.245038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.769 [2024-07-24 23:58:19.245200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.769 [2024-07-24 23:58:19.245220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.769 [2024-07-24 23:58:19.245232] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.245252] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=4096, cccid=0 00:20:48.769 [2024-07-24 23:58:19.245264] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df43c0) 
on tqpair(0x1d94540): expected_datao=0, payload_size=4096 00:20:48.769 [2024-07-24 23:58:19.245271] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.769 [2024-07-24 23:58:19.245291] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.245300] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.770 [2024-07-24 23:58:19.285433] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.770 [2024-07-24 23:58:19.285441] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.770 [2024-07-24 23:58:19.285459] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:48.770 [2024-07-24 23:58:19.285468] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:48.770 [2024-07-24 23:58:19.285476] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:48.770 [2024-07-24 23:58:19.285483] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:48.770 [2024-07-24 23:58:19.285491] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:48.770 [2024-07-24 23:58:19.285499] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.285518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.285539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285567] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.770 [2024-07-24 23:58:19.285591] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.770 [2024-07-24 23:58:19.285705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.770 [2024-07-24 23:58:19.285723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.770 [2024-07-24 23:58:19.285732] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285739] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:48.770 [2024-07-24 23:58:19.285750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285764] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.770 [2024-07-24 23:58:19.285785] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.770 [2024-07-24 23:58:19.285818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285831] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.770 [2024-07-24 23:58:19.285850] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.770 [2024-07-24 23:58:19.285881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.285916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.285931] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.285938] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.285949] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.770 [2024-07-24 23:58:19.285987] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df43c0, cid 0, qid 0 00:20:48.770 [2024-07-24 23:58:19.285997] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4540, cid 1, qid 0 00:20:48.770 [2024-07-24 23:58:19.286005] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df46c0, cid 2, qid 0 00:20:48.770 [2024-07-24 23:58:19.286015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:48.770 [2024-07-24 23:58:19.286039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.770 [2024-07-24 23:58:19.286231] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.770 [2024-07-24 23:58:19.286254] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.770 [2024-07-24 23:58:19.286263] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286270] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 
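At the API level, the whole exchange traced above (socket connect, icreq, the CC/CSTS handshake, IDENTIFY, AER configuration, keep-alive setup) is driven by a single spdk_nvme_connect() call. A minimal sketch of roughly what the identify tool does, assuming SPDK's public host API (spdk/nvme.h, spdk/env.h, v24.09-era signatures) and the same connect string the test passes via -r:

#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = {0};
    struct spdk_nvme_ctrlr_opts ctrlr_opts;
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) < 0) {
        return 1;
    }

    /* Same transport ID string the test passes to spdk_nvme_identify -r */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&ctrlr_opts, sizeof(ctrlr_opts));
    /* Runs the init state machine logged above, synchronously. */
    ctrlr = spdk_nvme_connect(&trid, &ctrlr_opts, sizeof(ctrlr_opts));
    if (ctrlr == NULL) {
        return 1;
    }

    /* Cached IDENTIFY CONTROLLER data, the source of the report later in the log. */
    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number: %.20s\n", (const char *)cdata->sn);
    printf("Model Number:  %.40s\n", (const char *)cdata->mn);

    spdk_nvme_detach(ctrlr);  /* triggers the shutdown sequence at the end of the log */
    return 0;
}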
00:20:48.770 [2024-07-24 23:58:19.286278] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:48.770 [2024-07-24 23:58:19.286287] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.286308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.286324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.286335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286343] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.286361] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:48.770 [2024-07-24 23:58:19.286383] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.770 [2024-07-24 23:58:19.286523] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.770 [2024-07-24 23:58:19.286541] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.770 [2024-07-24 23:58:19.286549] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286556] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:48.770 [2024-07-24 23:58:19.286625] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.286647] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:48.770 [2024-07-24 23:58:19.286664] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286673] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.770 [2024-07-24 23:58:19.286684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.770 [2024-07-24 23:58:19.286721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.770 [2024-07-24 23:58:19.286920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.770 [2024-07-24 23:58:19.286938] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.770 [2024-07-24 23:58:19.286950] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.286960] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=4096, cccid=4 00:20:48.770 [2024-07-24 23:58:19.286971] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df49c0) on tqpair(0x1d94540): expected_datao=0, payload_size=4096 00:20:48.770 [2024-07-24 23:58:19.286984] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:48.770 [2024-07-24 23:58:19.287006] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.287016] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.770 [2024-07-24 23:58:19.330258] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.330278] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.330286] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.330293] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.330315] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:48.771 [2024-07-24 23:58:19.330333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.330353] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.330370] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.330378] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.330390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.330414] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.771 [2024-07-24 23:58:19.330577] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.771 [2024-07-24 23:58:19.330596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.771 [2024-07-24 23:58:19.330608] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.330618] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=4096, cccid=4 00:20:48.771 [2024-07-24 23:58:19.330630] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df49c0) on tqpair(0x1d94540): expected_datao=0, payload_size=4096 00:20:48.771 [2024-07-24 23:58:19.330643] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.330664] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.330673] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.371460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.371469] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.371500] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371522] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors 
(timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371540] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.371560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.371585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.771 [2024-07-24 23:58:19.371709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:48.771 [2024-07-24 23:58:19.371728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:48.771 [2024-07-24 23:58:19.371735] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371746] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=4096, cccid=4 00:20:48.771 [2024-07-24 23:58:19.371759] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df49c0) on tqpair(0x1d94540): expected_datao=0, payload_size=4096 00:20:48.771 [2024-07-24 23:58:19.371777] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371796] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371807] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371819] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.371829] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.371836] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371843] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.371857] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371873] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371905] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371914] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371931] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:48.771 [2024-07-24 23:58:19.371939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:48.771 [2024-07-24 23:58:19.371948] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:48.771 [2024-07-24 23:58:19.371968] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.371977] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.372007] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.372019] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.372041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:48.771 [2024-07-24 23:58:19.372082] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:48.771 [2024-07-24 23:58:19.372095] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4b40, cid 5, qid 0 00:20:48.771 [2024-07-24 23:58:19.372296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.372313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.372320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.372338] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.372353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.372366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372378] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4b40) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.372409] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372427] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.372446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.372483] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4b40, cid 5, qid 0 00:20:48.771 [2024-07-24 23:58:19.372633] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.372653] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.372661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4b40) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.372685] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372697] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.372709] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.372733] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4b40, cid 5, qid 0 00:20:48.771 [2024-07-24 23:58:19.372848] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.372865] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.372871] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372878] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4b40) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.372897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.372908] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.372919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.372941] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4b40, cid 5, qid 0 00:20:48.771 [2024-07-24 23:58:19.373046] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:48.771 [2024-07-24 23:58:19.373064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:48.771 [2024-07-24 23:58:19.373072] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.373079] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4b40) on tqpair=0x1d94540 00:20:48.771 [2024-07-24 23:58:19.373105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.373119] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.373131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.373145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.771 [2024-07-24 23:58:19.373152] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1d94540) 00:20:48.771 [2024-07-24 23:58:19.373162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.771 [2024-07-24 23:58:19.373174] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.772 [2024-07-24 23:58:19.373182] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1d94540) 00:20:48.772 [2024-07-24 23:58:19.373191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:48.772 [2024-07-24 23:58:19.373208] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:48.772 [2024-07-24 23:58:19.373216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d94540) 00:20:49.030 [2024-07-24 23:58:19.377267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 
cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.030 [2024-07-24 23:58:19.377301] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4b40, cid 5, qid 0 00:20:49.030 [2024-07-24 23:58:19.377314] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df49c0, cid 4, qid 0 00:20:49.030 [2024-07-24 23:58:19.377337] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4cc0, cid 6, qid 0 00:20:49.030 [2024-07-24 23:58:19.377345] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4e40, cid 7, qid 0 00:20:49.030 [2024-07-24 23:58:19.377568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.030 [2024-07-24 23:58:19.377585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.030 [2024-07-24 23:58:19.377592] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377599] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=8192, cccid=5 00:20:49.030 [2024-07-24 23:58:19.377607] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df4b40) on tqpair(0x1d94540): expected_datao=0, payload_size=8192 00:20:49.030 [2024-07-24 23:58:19.377617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377727] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377741] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377750] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.030 [2024-07-24 23:58:19.377760] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.030 [2024-07-24 23:58:19.377766] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377776] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=512, cccid=4 00:20:49.030 [2024-07-24 23:58:19.377789] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df49c0) on tqpair(0x1d94540): expected_datao=0, payload_size=512 00:20:49.030 [2024-07-24 23:58:19.377800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377815] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377828] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377838] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.030 [2024-07-24 23:58:19.377847] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.030 [2024-07-24 23:58:19.377854] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377860] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=512, cccid=6 00:20:49.030 [2024-07-24 23:58:19.377868] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df4cc0) on tqpair(0x1d94540): expected_datao=0, payload_size=512 00:20:49.030 [2024-07-24 23:58:19.377876] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377885] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377892] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377901] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:49.030 [2024-07-24 23:58:19.377910] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:49.030 [2024-07-24 23:58:19.377916] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377922] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1d94540): datao=0, datal=4096, cccid=7 00:20:49.030 [2024-07-24 23:58:19.377930] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1df4e40) on tqpair(0x1d94540): expected_datao=0, payload_size=4096 00:20:49.030 [2024-07-24 23:58:19.377941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377952] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377960] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.030 [2024-07-24 23:58:19.377981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.030 [2024-07-24 23:58:19.377988] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.030 [2024-07-24 23:58:19.377995] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4b40) on tqpair=0x1d94540 00:20:49.030 [2024-07-24 23:58:19.378030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.030 [2024-07-24 23:58:19.378041] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.031 [2024-07-24 23:58:19.378048] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.031 [2024-07-24 23:58:19.378054] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df49c0) on tqpair=0x1d94540 00:20:49.031 [2024-07-24 23:58:19.378084] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.031 [2024-07-24 23:58:19.378095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.031 [2024-07-24 23:58:19.378101] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.031 [2024-07-24 23:58:19.378107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4cc0) on tqpair=0x1d94540 00:20:49.031 [2024-07-24 23:58:19.378117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.031 [2024-07-24 23:58:19.378126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.031 [2024-07-24 23:58:19.378133] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.031 [2024-07-24 23:58:19.378139] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4e40) on tqpair=0x1d94540
00:20:49.031 =====================================================
00:20:49.031 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:49.031 =====================================================
00:20:49.031 Controller Capabilities/Features
00:20:49.031 ================================
00:20:49.031 Vendor ID: 8086
00:20:49.031 Subsystem Vendor ID: 8086
00:20:49.031 Serial Number: SPDK00000000000001
00:20:49.031 Model Number: SPDK bdev Controller
00:20:49.031 Firmware Version: 24.09
00:20:49.031 Recommended Arb Burst: 6
00:20:49.031 IEEE OUI Identifier: e4 d2 5c
00:20:49.031 Multi-path I/O
00:20:49.031 May have multiple subsystem ports: Yes
00:20:49.031 May have multiple controllers: Yes
00:20:49.031 Associated with SR-IOV VF: No
00:20:49.031 Max Data Transfer Size: 131072
00:20:49.031 Max Number of Namespaces: 32
00:20:49.031 Max Number of I/O Queues: 127
00:20:49.031 NVMe Specification Version (VS): 1.3
00:20:49.031 NVMe Specification Version (Identify): 1.3
00:20:49.031 Maximum Queue Entries: 128
00:20:49.031 Contiguous Queues Required: Yes
00:20:49.031 Arbitration Mechanisms Supported
00:20:49.031 Weighted Round Robin: Not Supported
00:20:49.031 Vendor Specific: Not Supported
00:20:49.031 Reset Timeout: 15000 ms
00:20:49.031 Doorbell Stride: 4 bytes
00:20:49.031 NVM Subsystem Reset: Not Supported
00:20:49.031 Command Sets Supported
00:20:49.031 NVM Command Set: Supported
00:20:49.031 Boot Partition: Not Supported
00:20:49.031 Memory Page Size Minimum: 4096 bytes
00:20:49.031 Memory Page Size Maximum: 4096 bytes
00:20:49.031 Persistent Memory Region: Not Supported
00:20:49.031 Optional Asynchronous Events Supported
00:20:49.031 Namespace Attribute Notices: Supported
00:20:49.031 Firmware Activation Notices: Not Supported
00:20:49.031 ANA Change Notices: Not Supported
00:20:49.031 PLE Aggregate Log Change Notices: Not Supported
00:20:49.031 LBA Status Info Alert Notices: Not Supported
00:20:49.031 EGE Aggregate Log Change Notices: Not Supported
00:20:49.031 Normal NVM Subsystem Shutdown event: Not Supported
00:20:49.031 Zone Descriptor Change Notices: Not Supported
00:20:49.031 Discovery Log Change Notices: Not Supported
00:20:49.031 Controller Attributes
00:20:49.031 128-bit Host Identifier: Supported
00:20:49.031 Non-Operational Permissive Mode: Not Supported
00:20:49.031 NVM Sets: Not Supported
00:20:49.031 Read Recovery Levels: Not Supported
00:20:49.031 Endurance Groups: Not Supported
00:20:49.031 Predictable Latency Mode: Not Supported
00:20:49.031 Traffic Based Keep ALive: Not Supported
00:20:49.031 Namespace Granularity: Not Supported
00:20:49.031 SQ Associations: Not Supported
00:20:49.031 UUID List: Not Supported
00:20:49.031 Multi-Domain Subsystem: Not Supported
00:20:49.031 Fixed Capacity Management: Not Supported
00:20:49.031 Variable Capacity Management: Not Supported
00:20:49.031 Delete Endurance Group: Not Supported
00:20:49.031 Delete NVM Set: Not Supported
00:20:49.031 Extended LBA Formats Supported: Not Supported
00:20:49.031 Flexible Data Placement Supported: Not Supported
00:20:49.031 
00:20:49.031 Controller Memory Buffer Support
00:20:49.031 ================================
00:20:49.031 Supported: No
00:20:49.031 
00:20:49.031 Persistent Memory Region Support
00:20:49.031 ================================
00:20:49.031 Supported: No
00:20:49.031 
00:20:49.031 Admin Command Set Attributes
00:20:49.031 ============================
00:20:49.031 Security Send/Receive: Not Supported
00:20:49.031 Format NVM: Not Supported
00:20:49.031 Firmware Activate/Download: Not Supported
00:20:49.031 Namespace Management: Not Supported
00:20:49.031 Device Self-Test: Not Supported
00:20:49.031 Directives: Not Supported
00:20:49.031 NVMe-MI: Not Supported
00:20:49.031 Virtualization Management: Not Supported
00:20:49.031 Doorbell Buffer Config: Not Supported
00:20:49.031 Get LBA Status Capability: Not Supported
00:20:49.031 Command & Feature Lockdown Capability: Not Supported
00:20:49.031 Abort Command Limit: 4
00:20:49.031 Async Event Request Limit: 4
00:20:49.031 Number of Firmware Slots: N/A
00:20:49.031 Firmware Slot 1 Read-Only: N/A
00:20:49.031 Firmware Activation Without Reset: N/A
00:20:49.031 Multiple Update Detection Support: N/A
00:20:49.031 Firmware Update Granularity: No Information Provided
00:20:49.031 Per-Namespace SMART Log: No
00:20:49.031 Asymmetric Namespace Access Log Page: Not Supported
00:20:49.031 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:20:49.031 Command Effects Log Page: Supported
00:20:49.031 Get Log Page Extended Data: Supported
00:20:49.031 Telemetry Log Pages: Not Supported
00:20:49.031 Persistent Event Log Pages: Not Supported
00:20:49.031 Supported Log Pages Log Page: May Support
00:20:49.031 Commands Supported & Effects Log Page: Not Supported
00:20:49.031 Feature Identifiers & Effects Log Page:May Support
00:20:49.031 NVMe-MI Commands & Effects Log Page: May Support
00:20:49.031 Data Area 4 for Telemetry Log: Not Supported
00:20:49.031 Error Log Page Entries Supported: 128
00:20:49.031 Keep Alive: Supported
00:20:49.031 Keep Alive Granularity: 10000 ms
00:20:49.031 
00:20:49.031 NVM Command Set Attributes
00:20:49.031 ==========================
00:20:49.031 Submission Queue Entry Size
00:20:49.031 Max: 64
00:20:49.031 Min: 64
00:20:49.031 Completion Queue Entry Size
00:20:49.031 Max: 16
00:20:49.031 Min: 16
00:20:49.031 Number of Namespaces: 32
00:20:49.031 Compare Command: Supported
00:20:49.031 Write Uncorrectable Command: Not Supported
00:20:49.031 Dataset Management Command: Supported
00:20:49.031 Write Zeroes Command: Supported
00:20:49.031 Set Features Save Field: Not Supported
00:20:49.031 Reservations: Supported
00:20:49.031 Timestamp: Not Supported
00:20:49.031 Copy: Supported
00:20:49.031 Volatile Write Cache: Present
00:20:49.031 Atomic Write Unit (Normal): 1
00:20:49.031 Atomic Write Unit (PFail): 1
00:20:49.031 Atomic Compare & Write Unit: 1
00:20:49.031 Fused Compare & Write: Supported
00:20:49.031 Scatter-Gather List
00:20:49.031 SGL Command Set: Supported
00:20:49.031 SGL Keyed: Supported
00:20:49.031 SGL Bit Bucket Descriptor: Not Supported
00:20:49.031 SGL Metadata Pointer: Not Supported
00:20:49.031 Oversized SGL: Not Supported
00:20:49.031 SGL Metadata Address: Not Supported
00:20:49.031 SGL Offset: Supported
00:20:49.031 Transport SGL Data Block: Not Supported
00:20:49.031 Replay Protected Memory Block: Not Supported
00:20:49.031 
00:20:49.031 Firmware Slot Information
00:20:49.031 =========================
00:20:49.031 Active slot: 1
00:20:49.031 Slot 1 Firmware Revision: 24.09
00:20:49.031 
00:20:49.031 
00:20:49.031 Commands Supported and Effects
00:20:49.031 ==============================
00:20:49.031 Admin Commands
00:20:49.031 --------------
00:20:49.031 Get Log Page (02h): Supported
00:20:49.031 Identify (06h): Supported
00:20:49.031 Abort (08h): Supported
00:20:49.031 Set Features (09h): Supported
00:20:49.031 Get Features (0Ah): Supported
00:20:49.031 Asynchronous Event Request (0Ch): Supported
00:20:49.031 Keep Alive (18h): Supported
00:20:49.031 I/O Commands
00:20:49.031 ------------
00:20:49.031 Flush (00h): Supported LBA-Change
00:20:49.031 Write (01h): Supported LBA-Change
00:20:49.031 Read (02h): Supported
00:20:49.031 Compare (05h): Supported
00:20:49.031 Write Zeroes (08h): Supported LBA-Change
00:20:49.031 Dataset Management (09h): Supported LBA-Change
00:20:49.031 Copy (19h): Supported LBA-Change
00:20:49.031 
00:20:49.031 Error Log
00:20:49.031 =========
00:20:49.032 
00:20:49.032 Arbitration
00:20:49.032 ===========
00:20:49.032 Arbitration Burst: 1
00:20:49.032 
00:20:49.032 Power Management
00:20:49.032 ================
00:20:49.032 Number of Power States: 1
00:20:49.032 Current Power State: Power State #0
00:20:49.032 Power State #0:
00:20:49.032 Max Power: 0.00 W
00:20:49.032 Non-Operational State: Operational
00:20:49.032 Entry Latency: Not Reported
00:20:49.032 Exit Latency: Not Reported
00:20:49.032 Relative Read Throughput: 0
00:20:49.032 Relative Read Latency: 0
00:20:49.032 Relative Write Throughput: 0
00:20:49.032 Relative Write Latency: 0
00:20:49.032 Idle Power: Not Reported
00:20:49.032 Active Power: Not Reported
00:20:49.032 Non-Operational Permissive Mode: Not Supported
00:20:49.032 
00:20:49.032 Health Information
00:20:49.032 ==================
00:20:49.032 Critical Warnings:
00:20:49.032 Available Spare Space: OK
00:20:49.032 Temperature: OK
00:20:49.032 Device Reliability: OK
00:20:49.032 Read Only: No
00:20:49.032 Volatile Memory Backup: OK
00:20:49.032 Current Temperature: 0 Kelvin (-273 Celsius)
00:20:49.032 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:20:49.032 Available Spare: 0%
00:20:49.032 Available Spare Threshold: 0%
00:20:49.032 Life Percentage Used:[2024-07-24 23:58:19.378279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378308] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.378320] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.378344] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4e40, cid 7, qid 0 00:20:49.032 [2024-07-24 23:58:19.378482] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.378498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.378505] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378512] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4e40) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378560] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:49.032 [2024-07-24 23:58:19.378581] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df43c0) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.032 [2024-07-24 23:58:19.378605] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4540) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.032 [2024-07-24 23:58:19.378621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df46c0) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.032 [2024-07-24 23:58:19.378637] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:49.032 [2024-07-24 23:58:19.378677] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378686] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378693] nvme_tcp.c:
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.378703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.378740] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.378927] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.378945] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.378953] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378960] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.378972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378980] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.378986] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.378997] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.379026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.379147] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.379165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.379173] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.379187] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:49.032 [2024-07-24 23:58:19.379196] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:49.032 [2024-07-24 23:58:19.379214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379224] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.379253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.379279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.379407] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.379423] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.379430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.379456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379467] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379474] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.379485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.379507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.379609] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.379627] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.379635] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379646] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.379664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.379694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.379716] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.379821] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.379838] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.379846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.379870] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379880] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.379890] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.379901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.379923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.380030] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.380047] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.380055] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380062] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.380079] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380089] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 
[2024-07-24 23:58:19.380111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.032 [2024-07-24 23:58:19.380133] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.032 [2024-07-24 23:58:19.380234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.032 [2024-07-24 23:58:19.380260] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.032 [2024-07-24 23:58:19.380269] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380294] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.032 [2024-07-24 23:58:19.380314] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380326] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.032 [2024-07-24 23:58:19.380332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.032 [2024-07-24 23:58:19.380344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.380366] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.380484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 [2024-07-24 23:58:19.380500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.380507] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380514] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.380537] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380555] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.033 [2024-07-24 23:58:19.380566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.380588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.380702] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 [2024-07-24 23:58:19.380718] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.380725] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.380750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380761] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380768] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.033 [2024-07-24 23:58:19.380779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.380801] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.380914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 [2024-07-24 23:58:19.380930] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.380937] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380944] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.380962] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380973] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.380980] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.033 [2024-07-24 23:58:19.380991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.381012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.381128] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 [2024-07-24 23:58:19.381145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.381153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.381160] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.381178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.381190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.381196] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.033 [2024-07-24 23:58:19.381208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.381230] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.385276] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 [2024-07-24 23:58:19.385295] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.385303] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.385310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.385330] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.385350] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.385365] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1d94540) 00:20:49.033 [2024-07-24 23:58:19.385382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:49.033 [2024-07-24 23:58:19.385420] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1df4840, cid 3, qid 0 00:20:49.033 [2024-07-24 23:58:19.385568] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:49.033 
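The run of near-identical "FABRIC PROPERTY GET qid:0 cid:3" entries above is the host-side shutdown poll: after nvme_ctrlr_shutdown_set_cc_done sets CC.SHN (visible earlier in this trace), the initiator repeatedly reads the controller's CSTS register over the TCP connection until the shutdown-status bits report completion, and the trace just below reports the loop finishing in 6 milliseconds. The same register can be read by hand from a connected fabrics controller with nvme-cli; a minimal sketch, where fabrics get-property support in the installed nvme-cli and /dev/nvme0 as the controller handle are assumptions of this example:

    # CSTS lives at register offset 0x1c; CSTS.SHST moves from
    # "shutdown occurring" to "shutdown complete" once the poll would succeed
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable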
[2024-07-24 23:58:19.385590] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:49.033 [2024-07-24 23:58:19.385605] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:49.033 [2024-07-24 23:58:19.385618] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1df4840) on tqpair=0x1d94540 00:20:49.033 [2024-07-24 23:58:19.385641] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:20:49.033 0% 00:20:49.033 Data Units Read: 0 00:20:49.033 Data Units Written: 0 00:20:49.033 Host Read Commands: 0 00:20:49.033 Host Write Commands: 0 00:20:49.033 Controller Busy Time: 0 minutes 00:20:49.033 Power Cycles: 0 00:20:49.033 Power On Hours: 0 hours 00:20:49.033 Unsafe Shutdowns: 0 00:20:49.033 Unrecoverable Media Errors: 0 00:20:49.033 Lifetime Error Log Entries: 0 00:20:49.033 Warning Temperature Time: 0 minutes 00:20:49.033 Critical Temperature Time: 0 minutes 00:20:49.033 00:20:49.033 Number of Queues 00:20:49.033 ================ 00:20:49.033 Number of I/O Submission Queues: 127 00:20:49.033 Number of I/O Completion Queues: 127 00:20:49.033 00:20:49.033 Active Namespaces 00:20:49.033 ================= 00:20:49.033 Namespace ID:1 00:20:49.033 Error Recovery Timeout: Unlimited 00:20:49.033 Command Set Identifier: NVM (00h) 00:20:49.033 Deallocate: Supported 00:20:49.033 Deallocated/Unwritten Error: Not Supported 00:20:49.033 Deallocated Read Value: Unknown 00:20:49.033 Deallocate in Write Zeroes: Not Supported 00:20:49.033 Deallocated Guard Field: 0xFFFF 00:20:49.033 Flush: Supported 00:20:49.033 Reservation: Supported 00:20:49.033 Namespace Sharing Capabilities: Multiple Controllers 00:20:49.033 Size (in LBAs): 131072 (0GiB) 00:20:49.033 Capacity (in LBAs): 131072 (0GiB) 00:20:49.033 Utilization (in LBAs): 131072 (0GiB) 00:20:49.033 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:49.033 EUI64: ABCDEF0123456789 00:20:49.033 UUID: 6b0a6a27-fe59-4c0a-8cd6-caaa00775328 00:20:49.033 Thin Provisioning: Not Supported 00:20:49.033 Per-NS Atomic Units: Yes 00:20:49.033 Atomic Boundary Size (Normal): 0 00:20:49.033 Atomic Boundary Size (PFail): 0 00:20:49.033 Atomic Boundary Offset: 0 00:20:49.033 Maximum Single Source Range Length: 65535 00:20:49.033 Maximum Copy Length: 65535 00:20:49.033 Maximum Source Range Count: 1 00:20:49.033 NGUID/EUI64 Never Reused: No 00:20:49.033 Namespace Write Protected: No 00:20:49.033 Number of LBA Formats: 1 00:20:49.033 Current LBA Format: LBA Format #00 00:20:49.033 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:49.033 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:49.033 
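The teardown traced here (host/identify.sh handing off to nvmftestfini) boils down to a short sequence: delete the test subsystem over RPC, unload the host-side NVMe/TCP kernel modules, then stop the nvmf_tgt process. A condensed sketch of those steps, not the exact helper code, assuming the SPDK repo root as the working directory and $nvmfpid holding the target's pid (3430082 in this run):

    # drop the subsystem first so no host connection pins the modules
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    modprobe -v -r nvme-tcp        # the rmmod lines below show this also pulls nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics    # no-op if the removal above already unloaded it
    kill "$nvmfpid"                # rough stand-in for the killprocess helper,
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.1; done   # which kills and then waits for exit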
23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:49.033 rmmod nvme_tcp 00:20:49.033 rmmod nvme_fabrics 00:20:49.033 rmmod nvme_keyring 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 3430082 ']' 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 3430082 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 3430082 ']' 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 3430082 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3430082 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3430082' 00:20:49.033 killing process with pid 3430082 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 3430082 00:20:49.033 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 3430082 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.292 23:58:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.820 00:20:51.820 real 0m5.608s 00:20:51.820 user 0m4.883s 00:20:51.820 sys 0m1.885s 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 ************************************ 00:20:51.820 END TEST nvmf_identify 00:20:51.820 
************************************ 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.820 ************************************ 00:20:51.820 START TEST nvmf_perf 00:20:51.820 ************************************ 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:51.820 * Looking for test storage... 00:20:51.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.820 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # 
have_pci_nics=0 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.821 23:58:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:53.718 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.718 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:53.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:53.719 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:53.719 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr 
flush cvl_0_1 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.719 23:58:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:20:53.719 00:20:53.719 --- 10.0.0.2 ping statistics --- 00:20:53.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.719 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:20:53.719 00:20:53.719 --- 10.0.0.1 ping statistics --- 00:20:53.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.719 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=3432159 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@482 -- # waitforlisten 3432159 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 3432159 ']' 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.719 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:53.719 [2024-07-24 23:58:24.102768] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:20:53.719 [2024-07-24 23:58:24.102854] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.719 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.719 [2024-07-24 23:58:24.166101] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.719 [2024-07-24 23:58:24.275232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.719 [2024-07-24 23:58:24.275301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.719 [2024-07-24 23:58:24.275330] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.719 [2024-07-24 23:58:24.275342] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.719 [2024-07-24 23:58:24.275352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
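The app_setup_trace notices above describe how to pull a tracepoint snapshot out of this run while nvmf_tgt is still up. A minimal sketch, assuming a default SPDK build tree (spdk_trace under build/bin) and the shm id 0 the target was started with (-i 0):

    # decode a live snapshot straight from the target's shared-memory trace buffer
    build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt
    # or, as the last notice suggests, copy the shm file and decode it offline
    cp /dev/shm/nvmf_trace.0 /tmp/ && build/bin/spdk_trace -f /tmp/nvmf_trace.0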
00:20:53.719 [2024-07-24 23:58:24.275403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.719 [2024-07-24 23:58:24.275464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.719 [2024-07-24 23:58:24.275531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.719 [2024-07-24 23:58:24.275533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:20:53.977 23:58:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:20:57.250 23:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:57.250 23:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:57.250 23:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:57.250 23:58:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:57.507 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:57.507 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:57.507 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:57.507 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:57.508 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:57.764 [2024-07-24 23:58:28.262102] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.764 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.022 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:58.022 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.279 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:58.279 23:58:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:58.536 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:58.794 [2024-07-24 23:58:29.265735] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:58.794 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:59.050 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']'
00:20:59.050 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:20:59.050 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:20:59.050 23:58:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0'
00:21:00.421 Initializing NVMe Controllers
00:21:00.421 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54]
00:21:00.421 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0
00:21:00.421 Initialization complete. Launching workers.
00:21:00.421 ========================================================
00:21:00.421 Latency(us)
00:21:00.421 Device Information : IOPS MiB/s Average min max
00:21:00.421 PCIE (0000:88:00.0) NSID 1 from core 0: 84109.01 328.55 379.94 43.18 4481.85
00:21:00.421 ========================================================
00:21:00.421 Total : 84109.01 328.55 379.94 43.18 4481.85
00:21:00.421
00:21:00.421 23:58:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:00.421 EAL: No free 2048 kB hugepages reported on node 1
00:21:01.793 Initializing NVMe Controllers
00:21:01.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:01.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:01.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:01.794 Initialization complete. Launching workers.
00:21:01.794 ========================================================
00:21:01.794 Latency(us)
00:21:01.794 Device Information : IOPS MiB/s Average min max
00:21:01.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.00 0.39 9945.42 167.48 45753.11
00:21:01.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 55.00 0.21 18843.74 6974.46 48864.16
00:21:01.794 ========================================================
00:21:01.794 Total : 156.00 0.61 13082.65 167.48 48864.16
00:21:01.794
00:21:01.794 23:58:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:01.794 EAL: No free 2048 kB hugepages reported on node 1
00:21:03.187 Initializing NVMe Controllers
00:21:03.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:03.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:03.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:03.187 Initialization complete. Launching workers.
00:21:03.187 ========================================================
00:21:03.187 Latency(us)
00:21:03.187 Device Information : IOPS MiB/s Average min max
00:21:03.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8470.99 33.09 3779.13 647.25 7463.52
00:21:03.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3954.00 15.45 8136.51 6789.59 15704.44
00:21:03.187 ========================================================
00:21:03.187 Total : 12424.99 48.54 5165.78 647.25 15704.44
00:21:03.187
00:21:03.187 23:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:21:03.187 23:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:21:03.187 23:58:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:21:03.187 EAL: No free 2048 kB hugepages reported on node 1
00:21:05.727 Initializing NVMe Controllers
00:21:05.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.727 Controller IO queue size 128, less than required.
00:21:05.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.727 Controller IO queue size 128, less than required.
00:21:05.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:05.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:05.727 Initialization complete. Launching workers.
00:21:05.727 ========================================================
00:21:05.727 Latency(us)
00:21:05.727 Device Information : IOPS MiB/s Average min max
00:21:05.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1368.32 342.08 95952.04 54098.30 188508.40
00:21:05.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 574.79 143.70 227259.94 70388.50 361067.18
00:21:05.727 ========================================================
00:21:05.727 Total : 1943.11 485.78 134794.34 54098.30 361067.18
00:21:05.727
00:21:05.727 23:58:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:21:05.727 EAL: No free 2048 kB hugepages reported on node 1
00:21:05.727 No valid NVMe controllers or AIO or URING devices found
00:21:05.727 Initializing NVMe Controllers
00:21:05.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:05.727 Controller IO queue size 128, less than required.
00:21:05.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.727 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:21:05.727 Controller IO queue size 128, less than required.
00:21:05.727 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:05.727 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:21:05.727 WARNING: Some requested NVMe devices were skipped
00:21:05.727 23:58:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:21:05.727 EAL: No free 2048 kB hugepages reported on node 1
00:21:09.007 Initializing NVMe Controllers
00:21:09.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:09.007 Controller IO queue size 128, less than required.
00:21:09.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:09.007 Controller IO queue size 128, less than required.
00:21:09.007 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:21:09.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:09.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:21:09.007 Initialization complete. Launching workers.
00:21:09.007
00:21:09.007 ====================
00:21:09.007 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:21:09.007 TCP transport:
00:21:09.007 polls: 16951
00:21:09.007 idle_polls: 9200
00:21:09.007 sock_completions: 7751
00:21:09.007 nvme_completions: 5397
00:21:09.007 submitted_requests: 8076
00:21:09.007 queued_requests: 1
00:21:09.007
00:21:09.007 ====================
00:21:09.007 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:21:09.007 TCP transport:
00:21:09.007 polls: 14324
00:21:09.007 idle_polls: 6135
00:21:09.007 sock_completions: 8189
00:21:09.007 nvme_completions: 5605
00:21:09.007 submitted_requests: 8436
00:21:09.007 queued_requests: 1
00:21:09.007 ========================================================
00:21:09.007 Latency(us)
00:21:09.007 Device Information : IOPS MiB/s Average min max
00:21:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1347.38 336.84 98411.68 55327.37 178477.20
00:21:09.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1399.31 349.83 92544.82 45881.98 138061.26
00:21:09.007 ========================================================
00:21:09.007 Total : 2746.69 686.67 95422.78 45881.98 178477.20
00:21:09.007
00:21:09.007 23:58:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:21:09.007 23:58:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:09.007 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 3432159 ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 3432159
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 3432159 ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 3432159
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3432159
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3432159'
killing process with pid 3432159
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 3432159
00:21:09.007 23:58:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 3432159
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:10.378 23:58:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:12.908
00:21:12.908 real 0m21.098s
00:21:12.908 user 1m5.431s
00:21:12.908 sys 0m5.067s
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:21:12.908 ************************************
00:21:12.908 END TEST nvmf_perf
00:21:12.908 ************************************
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:12.908 ************************************
00:21:12.908 START TEST nvmf_fio_host
00:21:12.908 ************************************
00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:21:12.908 * Looking for test storage...
00:21:12.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.908 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:21:12.909 23:58:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:14.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:14.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.807 
23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.807 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:14.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:14.808 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
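What the xtrace above and immediately below is doing, stripped of timestamps: nvmf_tcp_init carves the second E810 port into a private network namespace so one machine can act as both NVMe/TCP target and initiator. A condensed sketch of the same split, using the interface names and 10.0.0.x addresses from this run (adjust both for other rigs):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side, inside the namespace
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                   # reachability check, run in both directions below

Every command in the sketch appears verbatim in the surrounding trace; the sketch only collects them in one place.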
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:21:14.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:21:14.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms
00:21:14.808
00:21:14.808 --- 10.0.0.2 ping statistics ---
00:21:14.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:14.808 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:21:14.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:21:14.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:21:14.808
00:21:14.808 --- 10.0.0.1 ping statistics ---
00:21:14.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:21:14.808 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]]
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3436000
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 3436000
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 3436000 ']'
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:14.808 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:15.066 [2024-07-24 23:58:45.271108] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:21:15.066 [2024-07-24 23:58:45.271182] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:21:15.066 EAL: No free 2048 kB hugepages reported on node 1
00:21:15.066 [2024-07-24 23:58:45.339066] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:21:15.066 [2024-07-24 23:58:45.462694] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:21:15.066 [2024-07-24 23:58:45.462760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:21:15.066 [2024-07-24 23:58:45.462777] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:21:15.066 [2024-07-24 23:58:45.462791] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:21:15.066 [2024-07-24 23:58:45.462802] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
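With the target process up inside the namespace, fio.sh provisions it entirely over JSON-RPC before any I/O runs; the trace below shows each call succeed. Collected into one place, with paths shortened to the repo root (the 64 MiB x 512 B malloc bdev is the test's stand-in for a real backing namespace):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # waitforlisten polls /var/tmp/spdk.sock until the target answers, then:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420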
00:21:15.066 [2024-07-24 23:58:45.466267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:21:15.066 [2024-07-24 23:58:45.466324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:21:15.066 [2024-07-24 23:58:45.466404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:21:15.066 [2024-07-24 23:58:45.466408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:21:15.066 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:21:15.066 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0
00:21:15.066 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:21:15.323 [2024-07-24 23:58:45.817454] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:21:15.323 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt
00:21:15.323 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable
00:21:15.323 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:15.323 23:58:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:21:15.581 Malloc1
00:21:15.581 23:58:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:21:15.837 23:58:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:21:16.094 23:58:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:21:16.352 [2024-07-24 23:58:46.873257] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:21:16.352 23:58:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib=
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}"
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}'
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]]
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}"
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}'
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]]
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:21:16.609 23:58:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096
00:21:16.867 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:21:16.867 fio-3.35
00:21:16.867 Starting 1 thread
00:21:16.867 EAL: No free 2048 kB hugepages reported on node 1
00:21:19.395
00:21:19.395 test: (groupid=0, jobs=1): err= 0: pid=3436395: Wed Jul 24 23:58:49 2024
00:21:19.395 read: IOPS=9033, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec)
00:21:19.395 slat (usec): min=2, max=256, avg= 2.63, stdev= 2.57
00:21:19.395 clat (usec): min=2742, max=13196, avg=7815.80, stdev=618.95
00:21:19.395 lat (usec): min=2784, max=13199, avg=7818.43, stdev=618.83
00:21:19.395 clat percentiles (usec):
00:21:19.395 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308],
00:21:19.395 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963],
00:21:19.395 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717],
00:21:19.395 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11600], 99.95th=[12780],
00:21:19.395 | 99.99th=[13173]
00:21:19.395 bw ( KiB/s): min=35288, max=36768, per=99.90%, avg=36098.00, stdev=618.48, samples=4
00:21:19.395 iops : min= 8822, max= 9192, avg=9024.50, stdev=154.62, samples=4
00:21:19.395 write: IOPS=9053, BW=35.4MiB/s (37.1MB/s)(70.9MiB/2006msec); 0 zone resets
00:21:19.395 slat (usec): min=2, max=211, avg= 2.79, stdev= 1.94
00:21:19.395 clat (usec): min=1990, max=12107, avg=6303.42, stdev=516.90
00:21:19.395 lat (usec): min=2004, max=12110, avg=6306.21, stdev=516.87
00:21:19.395 clat percentiles (usec):
00:21:19.395 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5735], 20.00th=[ 5932],
00:21:19.395 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6390],
00:21:19.395 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7111],
00:21:19.395 | 99.00th=[ 7504], 99.50th=[ 7701], 99.90th=[ 9765], 99.95th=[10814],
00:21:19.395 | 99.99th=[11600]
00:21:19.395 bw ( KiB/s): min=35968, max=36536, per=100.00%, avg=36212.00, stdev=254.71, samples=4
00:21:19.395 iops : min= 8992, max= 9134, avg=9053.00, stdev=63.68, samples=4
00:21:19.395 lat (msec) : 2=0.01%, 4=0.10%, 10=99.75%, 20=0.14%
00:21:19.395 cpu : usr=56.81%, sys=38.80%, ctx=83, majf=0, minf=38
00:21:19.395 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8%
00:21:19.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:19.395 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:19.395 issued rwts: total=18121,18161,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:19.395 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:19.395
00:21:19.395 Run status group 0 (all jobs):
00:21:19.395 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.2MB), run=2006-2006msec
00:21:19.395 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=70.9MiB (74.4MB), run=2006-2006msec
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local sanitizers
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # shift
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local asan_lib=
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}"
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libasan
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}'
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]]
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}"
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # awk '{print $3}'
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # asan_lib=
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # [[ -n '' ]]
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme'
00:21:19.395 23:58:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1'
00:21:19.395 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128
00:21:19.395 fio-3.35
00:21:19.395 Starting 1 thread
00:21:19.652 EAL: No free 2048 kB hugepages reported on node 1
00:21:22.178
00:21:22.178 test: (groupid=0, jobs=1): err= 0: pid=3436810: Wed Jul 24 23:58:52 2024
00:21:22.178 read: IOPS=8416, BW=132MiB/s (138MB/s)(263MiB/2001msec)
00:21:22.178 slat (nsec): min=2842, max=98727, avg=3678.05, stdev=1557.95
00:21:22.178 clat (usec): min=363, max=17256, avg=8747.51, stdev=2048.37
00:21:22.178 lat (usec): min=372, max=17260, avg=8751.19, stdev=2048.39
00:21:22.178 clat percentiles (usec):
00:21:22.178 | 1.00th=[ 4424], 5.00th=[ 5473], 10.00th=[ 6128], 20.00th=[ 7046],
00:21:22.178 | 30.00th=[ 7635], 40.00th=[ 8160], 50.00th=[ 8586], 60.00th=[ 9110],
00:21:22.178 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11338], 95.00th=[12256],
00:21:22.178 | 99.00th=[13960], 99.50th=[14746], 99.90th=[15926], 99.95th=[16057],
00:21:22.178 | 99.99th=[16319]
00:21:22.178 bw ( KiB/s): min=61472, max=76039, per=51.01%, avg=68695.67, stdev=7284.24, samples=3
00:21:22.178 iops : min= 3842, max= 4752, avg=4293.33, stdev=455.04, samples=3
00:21:22.178 write: IOPS=4995, BW=78.0MiB/s (81.8MB/s)(145MiB/1858msec); 0 zone resets
00:21:22.178 slat (usec): min=30, max=148, avg=33.37, stdev= 5.11
00:21:22.178 clat (usec): min=5632, max=20030, avg=10975.72, stdev=2080.27
00:21:22.178 lat (usec): min=5664, max=20062, avg=11009.09, stdev=2080.66
00:21:22.178 clat percentiles (usec):
00:21:22.178 | 1.00th=[ 7046], 5.00th=[ 7963], 10.00th=[ 8455], 20.00th=[ 9241],
00:21:22.178 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10814], 60.00th=[11338],
00:21:22.178 | 70.00th=[11863], 80.00th=[12518], 90.00th=[13698], 95.00th=[14615],
00:21:22.178 | 99.00th=[17171], 99.50th=[17957], 99.90th=[19530], 99.95th=[19792],
00:21:22.178 | 99.99th=[20055]
00:21:22.178 bw ( KiB/s): min=64064, max=78722, per=89.63%, avg=71638.00, stdev=7341.27, samples=3
00:21:22.178 iops : min= 4004, max= 4920, avg=4477.33, stdev=458.77, samples=3
00:21:22.178 lat (usec) : 500=0.01%
00:21:22.178 lat (msec) : 2=0.01%, 4=0.25%, 10=59.56%, 20=40.18%, 50=0.01%
00:21:22.178 cpu : usr=74.05%, sys=22.95%, ctx=36, majf=0, minf=62
00:21:22.178 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8%
00:21:22.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:21:22.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:21:22.178 issued rwts: total=16841,9281,0,0 short=0,0,0,0 dropped=0,0,0,0
00:21:22.178 latency : target=0, window=0, percentile=100.00%, depth=128
00:21:22.178
00:21:22.178 Run status group 0 (all jobs):
00:21:22.178 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=263MiB (276MB), run=2001-2001msec
00:21:22.178 WRITE: bw=78.0MiB/s (81.8MB/s), 78.0MiB/s-78.0MiB/s (81.8MB/s-81.8MB/s), io=145MiB (152MB), run=1858-1858msec
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20}
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:21:22.178 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 3436000 ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 3436000
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 3436000 ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 3436000
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3436000
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3436000'
killing process with pid 3436000
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 3436000
00:21:22.178 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 3436000
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:21:22.436 23:58:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:21:24.964 23:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:21:24.964
00:21:24.964 real 0m11.906s
00:21:24.964 user 0m34.981s
00:21:24.965 sys 0m3.969s
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x
00:21:24.965 ************************************
00:21:24.965 END TEST nvmf_fio_host
00:21:24.965 ************************************
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:21:24.965 23:58:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:21:24.965 ************************************
00:21:24.965 START TEST nvmf_failover
00:21:24.965 ************************************
00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp
00:21:24.965 * Looking for test storage...
00:21:24.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[the same golangci/protoc/go toolchain directories repeated several more times; duplicate PATH segments collapsed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same deduplicated toolchain PATH, now with protoc first] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo [same deduplicated PATH as exported above] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit
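At this point everything the failover script needs is pinned down: nvmf/common.sh fixed the three TCP service ports the test will cycle through and generated this host's NQN, and failover.sh chose its RPC endpoints; nvmftestinit, traced next, sets up the NICs. Condensed, the settings traced above are:

# key test environment from test/nvmf/common.sh and host/failover.sh (values from this run)
NVMF_PORT=4420                      # primary listener port
NVMF_SECOND_PORT=4421               # first failover port
NVMF_THIRD_PORT=4422                # second failover port
NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 on this host
NVMF_SERIAL=SPDKISFASTANDAWESOME
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
bdevperf_rpc_sock=/var/tmp/bdevperf.sock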
00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:24.965 23:58:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:26.866 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:26.866 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:26.867 23:58:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:26.867 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:26.867 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:26.867 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:26.867 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:26.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:26.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:21:26.867 00:21:26.867 --- 10.0.0.2 ping statistics --- 00:21:26.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.867 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:26.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:26.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:21:26.867 00:21:26.867 --- 10.0.0.1 ping statistics --- 00:21:26.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:26.867 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:26.867 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=3439002 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:26.868 23:58:57 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 3439002 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3439002 ']' 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:26.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:26.868 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:26.868 [2024-07-24 23:58:57.285700] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:21:26.868 [2024-07-24 23:58:57.285795] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:26.868 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.868 [2024-07-24 23:58:57.352960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:26.868 [2024-07-24 23:58:57.469406] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:26.868 [2024-07-24 23:58:57.469458] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:26.868 [2024-07-24 23:58:57.469486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:26.868 [2024-07-24 23:58:57.469498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:26.868 [2024-07-24 23:58:57.469508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
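nvmf_tgt (pid 3439002) is now starting inside the cvl_0_0_ns_spdk namespace; its reactors come online in the next lines, waitforlisten returns once the app answers on /var/tmp/spdk.sock, and the script provisions the target over RPC. A simplified stand-in for that wait (not SPDK's actual waitforlisten implementation), followed by the same RPC sequence the trace below executes:

# poll the RPC socket until nvmf_tgt responds (simplified stand-in for waitforlisten)
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
until $rpc_py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# provision the failover target: TCP transport, 64 MB malloc namespace, one subsystem, three listeners
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done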
00:21:26.868 [2024-07-24 23:58:57.469598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:26.868 [2024-07-24 23:58:57.469677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.868 [2024-07-24 23:58:57.469680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:27.126 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:27.411 [2024-07-24 23:58:57.893079] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:27.411 23:58:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:27.669 Malloc0 00:21:27.669 23:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:27.927 23:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:28.491 23:58:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:28.491 [2024-07-24 23:58:59.100383] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:28.748 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:28.748 [2024-07-24 23:58:59.353128] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:29.005 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:29.005 [2024-07-24 23:58:59.598032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3439301 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3439301 /var/tmp/bdevperf.sock 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3439301 ']' 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.262 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:29.520 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:29.520 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:29.520 23:58:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:29.777 NVMe0n1 00:21:29.777 23:59:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:30.341 00:21:30.341 23:59:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3439433 00:21:30.341 23:59:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.341 23:59:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:31.272 23:59:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:31.529 23:59:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:34.807 23:59:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:34.807 00:21:34.807 23:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:35.065 [2024-07-24 23:59:05.516116] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4d10 is same with the state(5) to be set 00:21:35.065 [2024-07-24 23:59:05.516183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4d10 is same with the state(5) to be set 00:21:35.065 [2024-07-24 23:59:05.516215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae4d10 is same with the state(5) 
to be set 00:21:35.065 [the tcp.c:1653:nvmf_tcp_qpair_set_recv_state '*ERROR*: The recv state of tqpair=0x1ae4d10 is same with the state(5) to be set' line repeats several dozen more times while the dropped connection is torn down; identical lines collapsed] 00:21:35.066 23:59:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:38.342 23:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:38.342 [2024-07-24 23:59:08.771535] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.342 23:59:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:39.275 23:59:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:39.533 [2024-07-24 23:59:10.074948] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ae5ab0 is same with the state(5) to be set
00:21:39.533 [the same tcp.c:1653 line for tqpair=0x1ae5ab0 repeats a few dozen more times; identical lines collapsed]
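The bursts of tcp.c:1653 '*ERROR*' lines above appear whenever a listener is pulled out from under active connections: the target seems to keep re-entering the same receive state while it tears the orphaned qpairs down, which is exactly the situation this test manufactures, so the noise is expected. To see which paths the subsystem still advertises between these steps, the listeners can be queried from the target side (an extra check, not something this run executes):

# list the subsystem's surviving listeners from the target side
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1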
00:21:39.533 23:59:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3439433 00:21:46.101 0 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3439301 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3439301 ']' 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3439301 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439301 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439301' 00:21:46.101 killing process with pid 3439301 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3439301 00:21:46.101 23:59:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3439301 00:21:46.101 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:46.101 [2024-07-24 23:58:59.663478] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:21:46.101 [2024-07-24 23:58:59.663572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3439301 ] 00:21:46.101 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.101 [2024-07-24 23:58:59.722766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.101 [2024-07-24 23:58:59.834447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.101 Running I/O for 15 seconds...
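What follows is the bdevperf log (try.txt) that the trap dumps at the end of the run: every I/O still in flight when a listener went away completes with 'ABORTED - SQ DELETION (00/08)', status code type 0x0 (generic) and status code 0x08, which the NVMe spec defines as Command Aborted due to SQ Deletion. These are the expected casualties of dropping a path mid-workload, not data errors. A quick tally from the saved file (illustrative; not a command this job runs):

# count the I/Os aborted across the failover windows
grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt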
00:21:46.101 [2024-07-24 23:59:01.910848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.101 [2024-07-24 23:59:01.910902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.102 [the same command/completion pair is dumped for every other I/O that was in flight, WRITE commands for lba 77872-78200 and READ commands for lba 77240-77376, roughly sixty near-identical pairs; duplicates collapsed] 00:21:46.103 [2024-07-24 23:59:01.912724] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.912981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.912996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.103 [2024-07-24 23:59:01.913367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.103 [2024-07-24 23:59:01.913381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:77600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:46.104 [2024-07-24 23:59:01.913641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:77624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:77632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:77648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:77664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:77680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:77696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913942] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:77704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.913972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:77712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.913985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:77720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:77728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:78208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:78216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:78224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:78240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:78248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914231] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:78256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.104 [2024-07-24 23:59:01.914253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:77736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:77768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:77776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:77784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:77792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:77800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:77808 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.104 [2024-07-24 23:59:01.914550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.104 [2024-07-24 23:59:01.914565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:01.914578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:77824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:01.914607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:01.914639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:77840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:01.914668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:01.914697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914711] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61cc10 is same with the state(5) to be set 00:21:46.105 [2024-07-24 23:59:01.914727] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.105 [2024-07-24 23:59:01.914738] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.105 [2024-07-24 23:59:01.914750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77856 len:8 PRP1 0x0 PRP2 0x0 00:21:46.105 [2024-07-24 23:59:01.914762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914825] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x61cc10 was disconnected and freed. reset controller. 
00:21:46.105 [2024-07-24 23:59:01.914843] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:46.105 [2024-07-24 23:59:01.914877] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.105 [2024-07-24 23:59:01.914896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.105 [2024-07-24 23:59:01.914923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914936] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.105 [2024-07-24 23:59:01.914950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.105 [2024-07-24 23:59:01.914977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:01.914989] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.105 [2024-07-24 23:59:01.915051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ff0f0 (9): Bad file descriptor 00:21:46.105 [2024-07-24 23:59:01.918310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.105 [2024-07-24 23:59:01.954886] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
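The burst above is expected during this failover test: deleting the submission queue on the old path aborts every queued I/O with ABORTED - SQ DELETION (00/08), after which bdev_nvme starts failover from 10.0.0.2:4420 to 10.0.0.2:4421 and resets the controller. A minimal sketch for condensing such a burst when reading these logs offline, assuming only the record format visible above; the script name summarize_aborts.py is hypothetical and this is not an SPDK tool:

#!/usr/bin/env python3
# summarize_aborts.py -- illustrative helper, not part of the SPDK test suite.
# Tallies the "ABORTED - SQ DELETION" I/O burst printed in the log and reports
# any failover transition, reading the raw autotest log on stdin.
import re
import sys
from collections import Counter

# Field layout mirrors the nvme_qpair.c print format seen in this log.
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)
FAILOVER_RE = re.compile(
    r"bdev_nvme_failover_trid: \*NOTICE\*: Start failover from (\S+) to (\S+)"
)

def main() -> None:
    # Read the whole log at once: several records can share one physical line.
    text = sys.stdin.read()
    ops = Counter()
    lbas = []
    for m in CMD_RE.finditer(text):
        ops[m.group(1)] += 1          # count aborted READs vs WRITEs
        lbas.append(int(m.group(5)))  # group 5 is the lba field
    for f in FAILOVER_RE.finditer(text):
        print(f"failover: {f.group(1)} -> {f.group(2)}")
    if lbas:
        print(f"aborted I/O: {dict(ops)}, lba span {min(lbas)}..{max(lbas)}")

if __name__ == "__main__":
    main()

Run as, e.g., python3 summarize_aborts.py < autotest.log (file name assumed); it prints the failover transition and a one-line tally in place of several hundred per-command notices.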
00:21:46.105 [2024-07-24 23:59:05.517286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:73800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:73808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:73816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:73824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:73832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:73840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:73848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:73856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:73864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:73872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517656] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:73880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:73888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:73904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:73912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:73928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:73936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:73944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:73952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517936] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:73960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:73968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.517976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.517991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:73976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.518004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.518018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:73984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.518031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.518045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:73992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.518058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.105 [2024-07-24 23:59:05.518073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.105 [2024-07-24 23:59:05.518086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:3 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74120 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.106 [2024-07-24 23:59:05.518811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.106 [2024-07-24 23:59:05.518826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.106 [2024-07-24 23:59:05.518840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.106 [2024-07-24 23:59:05.518854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.106 [2024-07-24 23:59:05.518867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.106 (the same print_command/print_completion pair repeats for the remaining in-flight I/O on qid:1: READ lba:74216 and lba:74224, then WRITE lba:74240 through lba:74616 in len:8 steps, every one completed as ABORTED - SQ DELETION (00/08))
00:21:46.108 [2024-07-24 23:59:05.520391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.108 [2024-07-24 23:59:05.520412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74624 len:8 PRP1 0x0 PRP2 0x0
00:21:46.108 [2024-07-24 23:59:05.520426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.108 (nvme_qpair_abort_queued_reqs then logs "*ERROR*: aborting queued i/o" followed by the same manual-completion triple for each queued WRITE from lba:74632 through lba:74808)
00:21:46.109 [2024-07-24 23:59:05.521544] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:46.109 [2024-07-24 23:59:05.521554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.109 [2024-07-24 23:59:05.521565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:74816 len:8 PRP1 0x0 PRP2 0x0
00:21:46.109 [2024-07-24 23:59:05.521577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:46.109 [2024-07-24 23:59:05.521601] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.109 [2024-07-24 23:59:05.521612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74232 len:8 PRP1 0x0 PRP2 0x0
00:21:46.109 [2024-07-24 23:59:05.521624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521686] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x62dd40 was disconnected and freed. reset controller.
00:21:46.109 [2024-07-24 23:59:05.521705] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:46.109 [2024-07-24 23:59:05.521739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.109 [2024-07-24 23:59:05.521757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.109 [2024-07-24 23:59:05.521784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.109 [2024-07-24 23:59:05.521811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:46.109 [2024-07-24 23:59:05.521837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.109 [2024-07-24 23:59:05.521850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:46.109 [2024-07-24 23:59:05.521904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ff0f0 (9): Bad file descriptor
00:21:46.109 [2024-07-24 23:59:05.525130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:46.109 [2024-07-24 23:59:05.598954] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
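(Reader's note on the completion prints above: the "(00/08)" that spdk_nvme_print_completion appends to each entry is the NVMe status pair (SCT/SC) in hex. SCT 0x00 is the Generic Command Status type, and under it SC 0x08 is "Command Aborted due to SQ Deletion", which is what every command outstanding on a submission queue reports when that queue is torn down during the failover; dnr:0 means the Do Not Retry bit is clear, so the bdev layer may requeue the I/O after the controller reconnects. A minimal standalone sketch of that decoding, in plain C and deliberately independent of the SPDK headers, covering only the two status values that occur in this log:

    #include <stdio.h>
    #include <stdint.h>

    /* Decode the "(SCT/SC)" pair that spdk_nvme_print_completion prints
     * in hex, e.g. "ABORTED - SQ DELETION (00/08)".  Only the values
     * seen in this log are mapped; anything else reports "unknown". */
    static const char *nvme_status_str(uint8_t sct, uint8_t sc)
    {
        if (sct == 0x00 && sc == 0x00) {
            return "SUCCESS";
        }
        if (sct == 0x00 && sc == 0x08) {
            /* Generic status type, Command Aborted due to SQ Deletion */
            return "ABORTED - SQ DELETION";
        }
        return "unknown";
    }

    int main(void)
    {
        printf("(00/08) -> %s\n", nvme_status_str(0x00, 0x08));
        return 0;
    }

SPDK itself carries the equivalent mapping in its nvme_spec definitions; this sketch only mirrors the two codes needed to read this log.)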
00:21:46.109 [2024-07-24 23:59:10.076456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:46.109 [2024-07-24 23:59:10.076496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.110 (a second abort burst follows on the post-failover connection: the same pair repeats for READ lba:14152 through lba:14448 and WRITE lba:14472 through lba:14848, all completed as ABORTED - SQ DELETION (00/08))
00:21:46.111 [2024-07-24 23:59:10.079132] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.111 [2024-07-24 23:59:10.079150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14856 len:8 PRP1 0x0 PRP2 0x0
00:21:46.111 [2024-07-24 23:59:10.079163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:46.112 (queued WRITEs from lba:14864 through lba:14960 are aborted and manually completed the same way)
00:21:46.112 [2024-07-24 23:59:10.079824] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:46.112 [2024-07-24 23:59:10.079835] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:46.112 [2024-07-24 23:59:10.079846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14968 len:8 PRP1 0x0 PRP2 0x0
00:21:46.112 [2024-07-24 23:59:10.079858] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.079871] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.079881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.079892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.079905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.079920] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.079930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.079941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14984 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.079954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.079967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.079977] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.079988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14992 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15000 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080120] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080130] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15016 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15024 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15032 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080316] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.112 [2024-07-24 23:59:10.080338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15048 len:8 PRP1 0x0 PRP2 0x0 00:21:46.112 [2024-07-24 23:59:10.080350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.112 [2024-07-24 23:59:10.080363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.112 [2024-07-24 23:59:10.080373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15056 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15064 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:46.113 [2024-07-24 23:59:10.080468] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080479] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080514] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15080 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080568] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15088 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15096 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080718] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15112 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080764] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080778] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15120 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080832] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15128 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15144 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.080962] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.080972] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.080983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15152 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.080996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.081030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.081041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15160 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.081053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081066] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.081082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.081093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14456 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.081106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:46.113 [2024-07-24 23:59:10.081129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:46.113 [2024-07-24 23:59:10.081140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:8 PRP1 0x0 PRP2 0x0 00:21:46.113 [2024-07-24 23:59:10.081152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081212] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x62fb40 was disconnected and freed. reset controller. 00:21:46.113 [2024-07-24 23:59:10.081237] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:46.113 [2024-07-24 23:59:10.081278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.113 [2024-07-24 23:59:10.081297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.113 [2024-07-24 23:59:10.081325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.113 [2024-07-24 23:59:10.081352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.113 [2024-07-24 23:59:10.081378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.113 [2024-07-24 23:59:10.081390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.113 [2024-07-24 23:59:10.084686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.113 [2024-07-24 23:59:10.084725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ff0f0 (9): Bad file descriptor 00:21:46.113 [2024-07-24 23:59:10.117887] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
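The burst above is one complete failover cycle: the submission queue on the active path is deleted, every queued I/O is manually completed with ABORTED - SQ DELETION, and bdev_nvme moves the trid from 10.0.0.2:4422 to 10.0.0.2:4420 before resetting the controller. As a hedged sketch (socket path, ports and NQN copied from the rpc.py calls traced in this log; the remove_listener trigger is an assumption about how the test provokes the failover), the multipath state that makes this possible is built like so:

    # register three trids for the same subsystem; bdev_nvme fails over between them
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # dropping the active listener on the target side then forces a failover like the one logged above
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422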
00:21:46.113 
00:21:46.113 Latency(us)
00:21:46.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.113 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:46.113 Verification LBA range: start 0x0 length 0x4000
00:21:46.113 NVMe0n1 : 15.01 8585.84 33.54 360.14 0.00 14280.39 801.00 16699.54
00:21:46.113 ===================================================================================================================
00:21:46.113 Total : 8585.84 33.54 360.14 0.00 14280.39 801.00 16699.54
00:21:46.113 Received shutdown signal, test time was about 15.000000 seconds
00:21:46.113 
00:21:46.113 Latency(us)
00:21:46.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.113 ===================================================================================================================
00:21:46.113 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3441270
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3441270 /var/tmp/bdevperf.sock
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 3441270 ']'
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:46.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
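Before relaunching bdevperf for the short verification pass, the script gates on exactly three 'Resetting controller successful' messages, apparently one per failover exercised during the 15-second phase. A hedged reconstruction of that gate from the failover.sh trace above (the grep input is assumed to be the try.txt log that the script cats further below, and the $testdir variable plus if/exit framing are illustrative assumptions; the count and threshold are from the trace):

    count=$(grep -c 'Resetting controller successful' "$testdir/try.txt")   # trace shows count=3
    if (( count != 3 )); then
        exit 1    # fewer or more resets than failovers means the multipath logic misbehaved
    fi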
00:21:46.114 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.114 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:46.114 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:46.114 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:21:46.114 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:46.371 [2024-07-24 23:59:16.704079] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:46.371 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:46.371 [2024-07-24 23:59:16.948744] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:46.371 23:59:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:46.936 NVMe0n1 00:21:46.936 23:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.500 00:21:47.500 23:59:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:47.757 00:21:47.757 23:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:47.757 23:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:48.014 23:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:48.578 23:59:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:51.853 23:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:51.853 23:59:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:51.853 23:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3441939 00:21:51.853 23:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.853 23:59:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3441939 00:21:52.847 0 00:21:52.847 23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-24 23:59:16.180739] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
[2024-07-24 23:59:16.180820] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3441270 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-24 23:59:16.239191] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-24 23:59:16.345435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-24 23:59:18.864092] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[2024-07-24 23:59:18.864177 - 23:59:18.864321] [... four ASYNC EVENT REQUEST (0c) admin commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08), as in the burst above ...]
[2024-07-24 23:59:18.864344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-24 23:59:18.864406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-24 23:59:18.864441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5c0f0 (9): Bad file descriptor
[2024-07-24 23:59:18.918815] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
Running I/O for 1 seconds... 
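This dump is the bdevperf side of the one-second verification pass: the 10.0.0.2:4420 path was detached while I/O was queued, so the bdev layer fails over to 4421, resets the controller, and only then starts the timed run whose result table follows below. As a hedged sketch, the capture corresponds to an invocation of this shape (binary path and flags copied from the host/failover.sh@72/@75 trace; the redirection into try.txt and the $testdir name are assumptions):

    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &> "$testdir/try.txt" &
    bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" /var/tmp/bdevperf.sock   # helper from common/autotest_common.sh
    # attach the controller over the RPC socket, then drive the workload:
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests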
00:21:52.847 
00:21:52.847 Latency(us)
00:21:52.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:52.847 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:52.847 Verification LBA range: start 0x0 length 0x4000
00:21:52.847 NVMe0n1 : 1.01 8533.62 33.33 0.00 0.00 14937.12 3034.07 12718.84
00:21:52.847 ===================================================================================================================
00:21:52.847 Total : 8533.62 33.33 0.00 0.00 14937.12 3034.07 12718.84
23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
23:59:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
23:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
23:59:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3441270 ']'
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3441270'
killing process with pid 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3441270
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
23:59:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:57.667 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:57.667 rmmod nvme_tcp 00:21:57.667 rmmod nvme_fabrics 00:21:57.667 rmmod nvme_keyring 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 3439002 ']' 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 3439002 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 3439002 ']' 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 3439002 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3439002 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3439002' 00:21:57.924 killing process with pid 3439002 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 3439002 00:21:57.924 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 3439002 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:58.182 23:59:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.080 00:22:00.080 real 0m35.629s 00:22:00.080 user 2m6.115s 00:22:00.080 sys 0m5.763s 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:00.080 ************************************ 00:22:00.080 END TEST nvmf_failover 00:22:00.080 ************************************ 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.080 23:59:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:00.338 ************************************ 00:22:00.338 START TEST nvmf_host_discovery 00:22:00.338 ************************************ 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:00.338 * Looking for test storage... 00:22:00.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... repeated golangci/protoc/go prefixes elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same tail as above with /opt/go/1.21.1/bin prepended ...]
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same tail as above with /opt/protoc/21.7/bin prepended ...]
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... repeated toolchain prefixes elided ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]]
23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
23:59:30 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:22:00.338 23:59:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.235 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:02.235 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:02.236 23:59:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:02.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:02.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:02.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:02.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:02.236 23:59:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:02.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:02.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:22:02.236 00:22:02.236 --- 10.0.0.2 ping statistics --- 00:22:02.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.236 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:02.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:02.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:22:02.236 00:22:02.236 --- 10.0.0.1 ping statistics --- 00:22:02.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:02.236 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:02.236 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:02.237 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=3444545 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 3444545 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3444545 ']' 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
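The nvmf_tcp_init block traced above boils down to a small back-to-back topology: the target port is moved into its own network namespace and the two e810 ports talk to each other over 10.0.0.0/24. A condensed sketch, with every command and name taken from the trace (nvmf/common.sh@229-268):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk                       # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

Both pings answer in well under a millisecond, consistent with the two ports being cabled back-to-back; every NVMe/TCP connection in the rest of this test rides this link.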
00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:02.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.495 23:59:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.495 [2024-07-24 23:59:32.922877] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:22:02.495 [2024-07-24 23:59:32.922975] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:02.495 EAL: No free 2048 kB hugepages reported on node 1 00:22:02.495 [2024-07-24 23:59:32.989957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.495 [2024-07-24 23:59:33.098384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:02.495 [2024-07-24 23:59:33.098457] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:02.495 [2024-07-24 23:59:33.098486] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:02.495 [2024-07-24 23:59:33.098499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:02.495 [2024-07-24 23:59:33.098508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:02.495 [2024-07-24 23:59:33.098551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 [2024-07-24 23:59:33.249270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:22:02.753 [2024-07-24 23:59:33.257462] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 null0 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 null1 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=3444689 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 3444689 /tmp/host.sock 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 3444689 ']' 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:02.753 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:02.753 23:59:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:02.753 [2024-07-24 23:59:33.330693] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
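From here the test runs two SPDK applications. The target nvmf_tgt was started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... -i 0 -e 0xFFFF -m 0x2, pid 3444545) and answers on the default RPC socket, while the nvmf_tgt just launched (-m 0x1 -r /tmp/host.sock, pid 3444689) plays the NVMe-oF host, so every rpc_cmd -s /tmp/host.sock below is a host-side call. The target-side setup issued so far, collected from the trace:

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
          -t tcp -a 10.0.0.2 -s 8009                 # well-known discovery NQN, port 8009
  rpc_cmd bdev_null_create null0 1000 512            # two null bdevs to export later
  rpc_cmd bdev_null_create null1 1000 512
  rpc_cmd bdev_wait_for_examine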
00:22:02.753 [2024-07-24 23:59:33.330774] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3444689 ] 00:22:02.753 EAL: No free 2048 kB hugepages reported on node 1 00:22:03.010 [2024-07-24 23:59:33.392324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:03.011 [2024-07-24 23:59:33.508340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 
23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:03.942 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:03.943 23:59:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:03.943 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.200 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 [2024-07-24 23:59:34.577059] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:22:04.201 23:59:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:04.765 [2024-07-24 23:59:35.349041] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:04.765 [2024-07-24 23:59:35.349074] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:04.765 [2024-07-24 23:59:35.349096] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.023 
[2024-07-24 23:59:35.437394] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:05.023 [2024-07-24 23:59:35.540032] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.023 [2024-07-24 23:59:35.540056] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
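All of the [[ ... == ... ]] assertions above funnel through the same helpers, whose bodies the xtrace keeps expanding inline. Reconstructed from the @55/@59/@63/@74 and @912-@918 trace lines (a paraphrase, so the source may differ in detail):

  waitforcondition() {            # autotest_common.sh: re-check a condition, 1 s apart
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1                    # failure path; not shown in the trace, assumed
  }

  get_subsystem_names() {         # controller names the host app knows about
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  }
  get_bdev_list() {               # namespaces surfaced as host-side bdevs
      rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  get_subsystem_paths() {         # listener ports connected for controller $1
      rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n $1 \
          | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
  }
  get_notification_count() {      # bdev notifications since the last check
      notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }

So is_notification_count_eq 1 above means exactly one bdev event arrived since the previous assertion: first the nvme0n1 attach after null0 was exported, then the nvme0n2 attach after null1.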
00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:05.281 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:05.282 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.540 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.541 23:59:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:05.541 23:59:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.541 [2024-07-24 23:59:36.065357] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:05.541 [2024-07-24 23:59:36.065933] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:05.541 [2024-07-24 23:59:36.065973] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:05.541 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.799 [2024-07-24 23:59:36.153705] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:05.799 23:59:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:05.799 23:59:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:22:05.799 [2024-07-24 23:59:36.213222] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:05.799 [2024-07-24 23:59:36.213259] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:05.799 [2024-07-24 23:59:36.213272] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:06.732 23:59:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 [2024-07-24 23:59:37.289821] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:06.732 [2024-07-24 23:59:37.289865] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:06.732 [2024-07-24 23:59:37.297084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.732 [2024-07-24 23:59:37.297132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.732 [2024-07-24 23:59:37.297149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.732 [2024-07-24 23:59:37.297163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.732 [2024-07-24 23:59:37.297185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.732 [2024-07-24 23:59:37.297199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.732 [2024-07-24 23:59:37.297214] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:06.732 [2024-07-24 23:59:37.297235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:06.732 [2024-07-24 23:59:37.297256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.732 [2024-07-24 23:59:37.307092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.732 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.733 [2024-07-24 23:59:37.317136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.733 [2024-07-24 23:59:37.317399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.733 [2024-07-24 23:59:37.317430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.733 [2024-07-24 23:59:37.317447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.733 [2024-07-24 23:59:37.317470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.733 [2024-07-24 23:59:37.317504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.733 [2024-07-24 23:59:37.317521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.733 [2024-07-24 23:59:37.317546] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.733 [2024-07-24 23:59:37.317566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.733 [2024-07-24 23:59:37.327222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.733 [2024-07-24 23:59:37.327465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.733 [2024-07-24 23:59:37.327494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.733 [2024-07-24 23:59:37.327537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.733 [2024-07-24 23:59:37.327563] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.733 [2024-07-24 23:59:37.327586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.733 [2024-07-24 23:59:37.327602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.733 [2024-07-24 23:59:37.327616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.733 [2024-07-24 23:59:37.327647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.733 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:06.733 [2024-07-24 23:59:37.337322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.733 [2024-07-24 23:59:37.337508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.733 [2024-07-24 23:59:37.337541] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.733 [2024-07-24 23:59:37.337558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.733 [2024-07-24 23:59:37.337580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.733 [2024-07-24 23:59:37.338497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.733 [2024-07-24 23:59:37.338536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.733 [2024-07-24 23:59:37.338552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.733 [2024-07-24 23:59:37.338589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.991 [2024-07-24 23:59:37.347400] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.991 [2024-07-24 23:59:37.347589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.991 [2024-07-24 23:59:37.347617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.991 [2024-07-24 23:59:37.347633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.991 [2024-07-24 23:59:37.347656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.991 [2024-07-24 23:59:37.347688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.991 [2024-07-24 23:59:37.347704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.991 [2024-07-24 23:59:37.347717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.991 [2024-07-24 23:59:37.347736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
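The waitforcondition traces interleaved above (common/autotest_common.sh@912-@916) come from a bounded polling helper: it evals the caller's condition string up to a fixed number of attempts before failing the test. A minimal sketch of that loop, assuming a short sleep between retries (the trace only shows the eval steps):

    waitforcondition() {
        local cond=$1
        local max=10
        while ((max--)); do
            # Re-evaluate the caller-supplied condition string on every pass
            if eval "$cond"; then
                return 0
            fi
            sleep 1   # assumed pacing between retries
        done
        return 1
    }

Because the condition is a string handed to eval, callers can pass compound checks such as 'get_notification_count && ((notification_count == expected_count))', exactly as the trace shows.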
00:22:06.991 [2024-07-24 23:59:37.357471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.991 [2024-07-24 23:59:37.357713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.991 [2024-07-24 23:59:37.357741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.991 [2024-07-24 23:59:37.357757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.991 [2024-07-24 23:59:37.357780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.991 [2024-07-24 23:59:37.357832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.991 [2024-07-24 23:59:37.357851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.991 [2024-07-24 23:59:37.357865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.991 [2024-07-24 23:59:37.357884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:06.991 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.991 [2024-07-24 23:59:37.367563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:06.991 [2024-07-24 23:59:37.367751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:06.991 [2024-07-24 23:59:37.367781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x248fc20 with addr=10.0.0.2, port=4420 00:22:06.991 [2024-07-24 23:59:37.367799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248fc20 is same with the state(5) to be set 00:22:06.992 [2024-07-24 23:59:37.367823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x248fc20 (9): Bad file descriptor 00:22:06.992 [2024-07-24 23:59:37.367845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:06.992 [2024-07-24 23:59:37.367859] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:06.992 [2024-07-24 23:59:37.367874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:06.992 [2024-07-24 23:59:37.367908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
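The notification check traced at host/discovery.sh@74-@75 fetches every event newer than the last consumed notify_id, counts them with jq, and advances the cursor by that count (here it finds 0 new events, so notify_id stays at 2). A sketch of that helper under those assumptions:

    get_notification_count() {
        # Count events newer than the current cursor
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # Advance the cursor so the next call only sees new notifications
        notify_id=$((notify_id + notification_count))
    }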
00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:22:06.992 [2024-07-24 23:59:37.375907] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:06.992 [2024-07-24 23:59:37.375943] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:06.992 23:59:37 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.992 23:59:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.364 [2024-07-24 23:59:38.657433] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:08.364 [2024-07-24 23:59:38.657466] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:08.364 [2024-07-24 23:59:38.657490] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:08.364 [2024-07-24 23:59:38.743802] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:08.364 [2024-07-24 23:59:38.974681] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:08.364 [2024-07-24 23:59:38.974724] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:08.364 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.364 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.364 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.364 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.622 request: 00:22:08.622 { 00:22:08.622 "name": "nvme", 00:22:08.622 "trtype": "tcp", 00:22:08.622 "traddr": "10.0.0.2", 00:22:08.622 "adrfam": "ipv4", 00:22:08.622 "trsvcid": "8009", 00:22:08.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.622 "wait_for_attach": true, 00:22:08.622 "method": "bdev_nvme_start_discovery", 00:22:08.622 "req_id": 1 00:22:08.622 } 00:22:08.622 Got JSON-RPC error response 00:22:08.622 response: 00:22:08.622 { 00:22:08.622 "code": -17, 00:22:08.622 "message": "File exists" 00:22:08.622 } 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:08.622 23:59:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.622 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.622 request: 00:22:08.622 { 00:22:08.622 "name": "nvme_second", 00:22:08.622 "trtype": "tcp", 00:22:08.622 "traddr": "10.0.0.2", 00:22:08.622 "adrfam": "ipv4", 00:22:08.622 "trsvcid": "8009", 00:22:08.622 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:08.622 "wait_for_attach": true, 00:22:08.623 "method": "bdev_nvme_start_discovery", 00:22:08.623 "req_id": 1 00:22:08.623 } 00:22:08.623 Got JSON-RPC error response 00:22:08.623 response: 00:22:08.623 { 00:22:08.623 "code": -17, 00:22:08.623 "message": "File exists" 00:22:08.623 } 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:08.623 23:59:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.623 23:59:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:09.994 [2024-07-24 23:59:40.170150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:09.994 [2024-07-24 23:59:40.170223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2493030 with addr=10.0.0.2, port=8010 00:22:09.994 [2024-07-24 23:59:40.170265] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:09.994 [2024-07-24 23:59:40.170299] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:09.994 [2024-07-24 23:59:40.170312] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:10.926 [2024-07-24 23:59:41.172554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:10.926 [2024-07-24 23:59:41.172619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2493030 with addr=10.0.0.2, port=8010 00:22:10.926 [2024-07-24 23:59:41.172651] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:10.926 [2024-07-24 23:59:41.172665] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:22:10.926 [2024-07-24 23:59:41.172678] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:11.859 [2024-07-24 23:59:42.174747] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:11.859 request: 00:22:11.859 { 00:22:11.859 "name": "nvme_second", 00:22:11.859 "trtype": "tcp", 00:22:11.859 "traddr": "10.0.0.2", 00:22:11.859 "adrfam": "ipv4", 00:22:11.859 "trsvcid": "8010", 00:22:11.859 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:11.859 "wait_for_attach": false, 00:22:11.859 "attach_timeout_ms": 3000, 00:22:11.859 "method": "bdev_nvme_start_discovery", 00:22:11.859 "req_id": 1 00:22:11.859 } 00:22:11.859 Got JSON-RPC error response 00:22:11.859 response: 00:22:11.859 { 00:22:11.859 "code": -110, 00:22:11.859 "message": "Connection timed out" 00:22:11.859 } 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 3444689 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:11.859 rmmod nvme_tcp 00:22:11.859 rmmod nvme_fabrics 00:22:11.859 rmmod nvme_keyring 00:22:11.859 23:59:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 3444545 ']' 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 3444545 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 3444545 ']' 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 3444545 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3444545 00:22:11.859 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:11.860 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:11.860 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3444545' 00:22:11.860 killing process with pid 3444545 00:22:11.860 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 3444545 00:22:11.860 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 3444545 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:12.118 23:59:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.049 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:14.049 00:22:14.049 real 0m13.939s 00:22:14.049 user 0m20.792s 00:22:14.049 sys 0m2.831s 00:22:14.049 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:14.049 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:14.049 ************************************ 00:22:14.049 END TEST nvmf_host_discovery 00:22:14.049 ************************************ 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
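The banner and timing pattern around this point (END TEST above, START TEST below, plus the real/user/sys block) comes from the run_test wrapper, which brackets each sub-test with banners and times its body. A stripped-down sketch; the real helper in autotest_common.sh also manages xtrace state:

    run_test() {
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # produces the real/user/sys summary seen above
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }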
00:22:14.306 23:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:14.306 ************************************ 00:22:14.306 START TEST nvmf_host_multipath_status 00:22:14.306 ************************************ 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:14.306 * Looking for test storage... 00:22:14.306 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:14.306 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
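nvmf/common.sh@17-@19, traced above, derive the host identity used by later nvme connect calls: a generated host NQN, the bare UUID pulled out of it, and an argument array bundling both. One way those lines plausibly read, assuming nvme-cli's gen-hostnqn output format shown in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # strip everything up to the bare UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")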
00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
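The repeated directories in the PATH dump above are an artifact of paths/export.sh prepending the same toolchain directories each time it is sourced, once per sub-test. Its effect reduces to a few prepends, sketched here:

    # Sourced repeatedly, so each run prepends the trio again -- hence the duplicates
    PATH=/opt/golangci/1.54.2/bin:$PATH
    PATH=/opt/go/1.21.1/bin:$PATH
    PATH=/opt/protoc/21.7/bin:$PATH
    export PATH
    echo "$PATH"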
00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:22:14.307 23:59:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.206 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:16.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.207 
23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:16.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:16.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.207 23:59:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:16.207 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.207 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:22:16.465 00:22:16.465 --- 10.0.0.2 ping statistics --- 00:22:16.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.465 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:16.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:22:16.465 00:22:16.465 --- 10.0.0.1 ping statistics --- 00:22:16.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.465 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=3447846 00:22:16.465 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 3447846 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3447846 ']' 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:16.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.466 23:59:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.466 [2024-07-24 23:59:46.963302] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:22:16.466 [2024-07-24 23:59:46.963378] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.466 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.466 [2024-07-24 23:59:47.025661] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:16.724 [2024-07-24 23:59:47.134705] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.724 [2024-07-24 23:59:47.134768] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.724 [2024-07-24 23:59:47.134786] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.724 [2024-07-24 23:59:47.134797] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.724 [2024-07-24 23:59:47.134806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.724 [2024-07-24 23:59:47.134932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.724 [2024-07-24 23:59:47.134937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3447846 00:22:16.724 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:16.981 [2024-07-24 23:59:47.554843] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.981 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:17.240 Malloc0 00:22:17.497 23:59:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:22:17.755 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:18.012 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.012 [2024-07-24 23:59:48.609385] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:18.269 [2024-07-24 23:59:48.858009] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3448015 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3448015 /var/tmp/bdevperf.sock 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 3448015 ']' 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:18.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
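[Annotation] Condensing the bring-up traced between nvmf/common.sh@248 and host/multipath_status.sh@42: the target-side port is moved into its own network namespace, both ends are addressed, the NVMe/TCP port is opened in the firewall, and the target is then provisioned over its default RPC socket. A hedged sketch, with nvmf_tgt and rpc.py assumed to be on PATH (the log invokes them by full path) and all names, addresses and ports taken from the log:

    # Network plumbing: the target port lives in a namespace of its own.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Target app plus provisioning (wait for /var/tmp/spdk.sock before
    # issuing RPCs, as waitforlisten does in the trace).
    ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    rpc.py nvmf_create_transport -t tcp -o -u 8192         # -u: in-capsule data size
    rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM disk, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
           -a -s SPDK00000000000001 -r -m 2                # -r: report ANA state
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

Two listeners on the same address, 4420 and 4421, give the host two paths whose ANA states the rest of the run flips independently.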
00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:18.269 23:59:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:18.834 23:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.834 23:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:22:18.835 23:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:18.835 23:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:19.399 Nvme0n1 00:22:19.399 23:59:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:19.964 Nvme0n1 00:22:19.964 23:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:19.964 23:59:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:22:21.862 23:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:21.863 23:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:22.120 23:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:22.378 23:59:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:23.309 23:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:23.309 23:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:23.309 23:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.309 23:59:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.567 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.567 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:23.567 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.567 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.824 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:23.824 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.824 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.824 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:24.082 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.082 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:24.082 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.082 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:24.340 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.340 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:24.340 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.340 23:59:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:24.596 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.596 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:24.596 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:24.596 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.854 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.854 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:24.854 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:25.112 23:59:55 
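[Annotation] On the initiator side bdevperf runs as a second SPDK app on /var/tmp/bdevperf.sock, and the same subsystem is attached once per listener, the second time with -x multipath so both connections merge into one Nvme0n1 bdev. Every status check that follows goes through the port_status helper at host/multipath_status.sh@64; below is a simplified reconstruction from the trace (rpc.py on PATH is an assumption; the flags are the ones logged):

    # Two paths into one bdev; -l/-o are the logged reconnect knobs
    # (controller loss timeout -1 = never give up, 10 s between attempts).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
           -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
           -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

    # port_status, reconstructed: one RPC round-trip, one field, one compare.
    port_status() {   # $1=port  $2=field (current|connected|accessible)  $3=expected
        local v
        v=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
        [[ "$v" == "$3" ]]
    }
    port_status 4420 current true    # e.g. the first @68 check above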
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:25.370 23:59:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:26.742 23:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:26.742 23:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:26.742 23:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.742 23:59:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:26.742 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:26.742 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:26.742 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.742 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:27.000 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.000 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:27.000 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.000 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:27.258 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.258 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:27.258 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.258 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:27.516 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.516 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:27.516 23:59:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.516 23:59:57 
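[Annotation] The @94-@96 pass around here is the basic ANA-aware failover: with 4420 demoted to non_optimized and 4421 optimized, 'current' migrates from the 4420 path to the 4421 path while both stay connected, with no reconnect in between. In terms of the helper sketched earlier:

    # Expectations for the non_optimized/optimized split (@96).
    port_status 4420 current   false
    port_status 4421 current   true
    port_status 4420 connected true     # failover, not reconnection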
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:27.774 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.774 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:27.774 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:27.774 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:28.031 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:28.031 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:28.031 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:28.288 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:28.546 23:59:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:29.478 23:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:29.478 23:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:29.478 23:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.478 23:59:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:29.736 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.736 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:29.736 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.736 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:29.994 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:29.994 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:29.994 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.994 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:30.252 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.252 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:30.252 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.252 00:00:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:30.510 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.510 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:30.510 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.510 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:30.768 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.768 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:30.768 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.768 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:31.025 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:31.025 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:31.025 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:31.287 00:00:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:31.570 00:00:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:32.501 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:32.501 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:32.501 00:00:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.501 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:32.760 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.760 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:32.760 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.760 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:33.016 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.017 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:33.017 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.017 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:33.273 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.273 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:33.273 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.273 00:00:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:33.530 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.530 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:33.530 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.530 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:33.787 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.787 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:33.787 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.787 00:00:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:34.044 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:34.044 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:34.044 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:34.302 00:00:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:34.558 00:00:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:35.926 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:36.182 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.182 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:36.182 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.182 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:36.438 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.438 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:36.438 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
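[Annotation] The sequence from @104 onward is the inaccessible half of the matrix: a path whose listener is ANA-inaccessible keeps its TCP connection (connected stays true) but is excluded from I/O (accessible and current go false), and with both listeners inaccessible no path is current at all, as the @110 checks just below confirm. A hedged sketch of one such step, reusing the helper above (names from the log):

    # Park the 4421 path without tearing it down.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1                              # let the host re-read the ANA log page
    port_status 4421 connected  true
    port_status 4421 accessible false
    port_status 4421 current    false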
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.438 00:00:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:36.695 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:36.695 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:36.695 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.695 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:36.952 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:36.952 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:36.952 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:36.952 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:37.209 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:37.209 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:37.209 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:37.465 00:00:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:37.722 00:00:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:38.656 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:38.656 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:38.656 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.656 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:38.912 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:38.912 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:38.912 00:00:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:38.912 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:39.169 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.169 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:39.169 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.169 00:00:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:39.427 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.427 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:39.427 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.427 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:39.684 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:39.684 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:39.684 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.684 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:39.941 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:39.941 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:39.941 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:39.941 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:40.197 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:40.197 00:00:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:40.455 00:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:22:40.455 00:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:40.711 00:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:40.968 00:00:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.340 00:00:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:42.598 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.598 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:42.598 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.598 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:42.855 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:42.855 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:42.855 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:42.855 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:43.112 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.113 00:00:13 
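[Annotation] Everything up to host/multipath_status.sh@116 ran under the default active_passive policy, where at most one path is current at a time. The bdev_nvme_set_multipath_policy call switches Nvme0n1 to active_active, so from here on every usable path carries I/O, which is why the @121 check accepts current=true on 4420 and 4421 simultaneously. A quick way to see the change (sketch, same assumptions as above):

    # Count the paths actively used for I/O; with both listeners
    # optimized this prints 2 under active_active (it was 1 before).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current)] | length'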
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:43.113 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.113 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:43.370 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.370 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:43.370 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:43.370 00:00:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:43.626 00:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:43.626 00:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:43.626 00:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:43.883 00:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:44.140 00:00:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:22:45.073 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:45.073 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:45.073 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.073 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:45.331 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:45.331 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:45.331 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.331 00:00:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:45.589 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.589 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:45.589 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.589 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:45.846 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:45.846 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:45.846 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:45.846 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:46.102 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.103 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:46.103 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.103 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:46.360 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.360 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:46.360 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:46.360 00:00:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:46.618 00:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:46.618 00:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:46.618 00:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:46.877 00:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:47.135 00:00:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
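[Annotation] The @129-@131 case rounds out the active_active matrix: with both listeners non_optimized there is no better path to prefer, so both remain current, the same outcome as optimized/optimized. Sketch of the step (names from the log, path count as in the previous block):

    # No optimized path available: non_optimized paths share the I/O.
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
    rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
           -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq '[.poll_groups[].io_paths[] | select(.current)] | length'   # -> 2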
00:22:48.066 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:22:48.066 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:48.066 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:48.066 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:48.323 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:48.323 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:22:48.323 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:48.323 00:00:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:48.580 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:48.580 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:48.580 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:48.580 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:48.847 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:48.847 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:48.847 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:48.847 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:49.116 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:49.116 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:49.116 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:49.117 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:49.373 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:49.374 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:22:49.374 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:49.374 00:00:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:49.629 00:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:49.629 00:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:22:49.629 00:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:22:49.886 00:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:22:50.144 00:00:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:22:51.078 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:22:51.078 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:22:51.078 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:51.078 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:22:51.335 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:51.335 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:22:51.335 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:51.335 00:00:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:22:51.592 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:51.592 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:22:51.592 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:51.592 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:22:51.848 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:51.848 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:22:51.848 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:51.848 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:22:52.105 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:52.105 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:22:52.105 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:52.105 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:22:52.362 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:22:52.362 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:22:52.362 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:22:52.362 00:00:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3448015
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3448015 ']'
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3448015
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3448015
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:22:52.618 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3448015'
killing process with pid 3448015
00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3448015
00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3448015
00:22:52.889 Connection closed with partial response:
00:22:52.889
00:22:52.889
00:22:52.889 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3448015
00:22:52.889 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:52.889 [2024-07-24 23:59:48.921586] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:22:52.889 [2024-07-24 23:59:48.921666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3448015 ]
00:22:52.889 EAL: No free 2048 kB hugepages reported on node 1
00:22:52.889 [2024-07-24 23:59:48.980208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:52.889 [2024-07-24 23:59:49.088894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:52.889 Running I/O for 90 seconds...
00:22:52.889 [2024-07-25 00:00:04.889100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:66160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:66168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:66184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:66192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:66200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:66208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.889 [2024-07-25 00:00:04.889457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:22:52.889 [2024-07-25 00:00:04.889478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:66216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:66224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:66240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:66256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.889 [2024-07-25 00:00:04.889705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.889 [2024-07-25 00:00:04.889726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:66264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:66288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:66312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.889975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:66320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.889990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:66328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:66344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:66360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:66368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:66376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:66384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:66408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:66416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:66424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:66432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:66440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:52.890 [2024-07-25 00:00:04.890636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:66456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:66464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:66480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:66512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.890977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:66528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.890991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:66536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:66544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:66552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:66584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:66592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:66600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:66608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:66616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:66624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:66632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:66640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:66656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:66672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.890 
[2024-07-25 00:00:04.891956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:66688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.891971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.891996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:66696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:66720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 
cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.890 [2024-07-25 00:00:04.892466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:66016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.890 [2024-07-25 00:00:04.892509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.890 [2024-07-25 00:00:04.892550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.890 [2024-07-25 00:00:04.892575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.892963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.892978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893215] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.893966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.893987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.894035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.891 [2024-07-25 00:00:04.894081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:66032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:66040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:66056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:81 nsid:1 lba:66064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:66072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:66112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:66128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.891 [2024-07-25 00:00:04.894776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.891 [2024-07-25 00:00:04.894804] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.891 [2024-07-25 00:00:04.894820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
[... several hundred further nvme_qpair.c NOTICE records in the same alternating pattern: 243:nvme_io_qpair_print_command entries (READ/WRITE sqid:1 nsid:1 len:8; lba 66144-67032 at 2024-07-25 00:00:04.894, lba 49072-50368 at 00:00:20.569-20.584; WRITEs as SGL DATA BLOCK OFFSET 0x0 len:0x1000, READs as SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) each paired with a 474:spdk_nvme_print_completion entry reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, console elapsed 00:22:52.891-00:22:52.895 ...]
00:22:52.895 [2024-07-25 00:00:20.584353] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.584390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.584426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.584500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.584551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.584588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.584639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.584684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.584704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.584719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.585586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:49200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:52.895 [2024-07-25 00:00:20.585610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.585636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.585653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.585690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:49448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.585705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.585725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.585740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.585760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.585775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.587556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.587611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.895 [2024-07-25 00:00:20.587648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.587682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.587731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 
nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.587768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.895 [2024-07-25 00:00:20.587804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.895 [2024-07-25 00:00:20.587820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.587841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.587862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.587884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.587900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.587921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.587937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.587957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:50464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.587973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.587993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:50496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:49752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588167] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:49960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:49768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 
00:22:52.896 [2024-07-25 00:00:20.588568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:49312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.588963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.588984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:50264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.588998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.589033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:50328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.589068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.589118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.589153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.589188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.589208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:49576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.589222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.590170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.590217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.590491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:50624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.590507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:50144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:49136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.896 [2024-07-25 00:00:20.591220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:126 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.591892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.896 [2024-07-25 00:00:20.591964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.896 [2024-07-25 00:00:20.591985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.896 [2024-07-25 00:00:20.592000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.592021] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.592037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.592058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.592073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.592094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.592109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.592144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.592160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.592180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:49560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.592195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.593722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.593766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:50296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.593810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:50360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.593847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:49328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.593882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006f p:0 m:0 dnr:0 
00:22:52.897 [2024-07-25 00:00:20.593904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:50384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.593918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:50416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.593969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.593990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:50504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:50320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:50608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:18 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:50128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:50752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:50080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:50256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:50208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:49384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594706] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:50712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.594948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.594968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:49888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.897 [2024-07-25 00:00:20.594982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.595001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:50248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.897 [2024-07-25 00:00:20.595016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.897 [2024-07-25 00:00:20.597959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:49200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:52.897 [2024-07-25 00:00:20.597983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:52.897 [2024-07-25 00:00:20.598023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.897 [2024-07-25 00:00:20.598040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:52.897 [2024-07-25 00:00:20.598077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:50536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.897 [2024-07-25 00:00:20.598093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:52.897 [2024-07-25 00:00:20.598114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:50792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:52.897 [2024-07-25 00:00:20.598130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
[... ~200 further nvme_io_qpair_print_command / spdk_nvme_print_completion pairs elided: every remaining queued READ and WRITE on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02); sqhd advances 0011-007f, wraps to 0000, and reaches 005a between 00:00:20.598 and 00:00:20.614 ...]
00:22:52.900 [2024-07-25 00:00:20.614348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:52.900 [2024-07-25 00:00:20.614363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:22:52.900 [2024-07-25 00:00:20.614385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.614400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:50960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.615646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.615689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.900 [2024-07-25 00:00:20.615724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.900 [2024-07-25 00:00:20.615777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.900 [2024-07-25 00:00:20.615819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.900 [2024-07-25 00:00:20.615857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.615893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.615930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:51352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.615966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:52.900 [2024-07-25 00:00:20.615987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.900 [2024-07-25 00:00:20.616002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:51600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616377] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:50872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:50592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:52.901 [2024-07-25 00:00:20.616759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.616832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.616852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.616867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:50992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.619417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.619461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:51448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.619498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:51768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 
lba:51800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:51832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:51864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:51088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.619889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:51736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.619956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.619976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.619990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620010] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.620092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.620144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.620236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:50984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:22:52.901 [2024-07-25 00:00:20.620417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:51616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.620432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:51496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:51328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.620741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:50792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.620757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:51896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.621816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:51912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.621860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.621898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:51608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.621933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:51640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.621970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.621990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.901 [2024-07-25 00:00:20.622006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.622026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.622041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.622062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.622091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.622113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.901 [2024-07-25 00:00:20.622142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:52.901 [2024-07-25 00:00:20.622163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.622177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.622198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.622213] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.622955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:51696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.622979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:51728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:51488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:51584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:52.902 [2024-07-25 00:00:20.623378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:51416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:51752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:51784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:51816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:51848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:51880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:51704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:51008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 
lba:51568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:51344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:52.902 [2024-07-25 00:00:20.623805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:51176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:50576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:51232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:51528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.623980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:51592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.623996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.624018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:51392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.624033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.624653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:51312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.624676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.624702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:52.902 [2024-07-25 00:00:20.624735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:52.902 [2024-07-25 00:00:20.624757] nvme_qpair.c: 
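For reference, the "(03/02)" pair printed in every completion above is status code type 0x3 / status code 0x02; under the NVMe base specification's Path Related Status type that decodes to Asymmetric Access Inaccessible, meaning the controller's ANA state makes the namespace unreachable on this path while the test has the active port down. A small shell sketch of that decode, with the code table taken from the spec:

    # Path Related Status (SCT 0x3) codes from the NVMe base specification.
    declare -A path_status=(
        [0x00]="Internal Path Error"
        [0x01]="Asymmetric Access Persistent Loss"
        [0x02]="Asymmetric Access Inaccessible"
        [0x03]="Asymmetric Access Transition"
    )
    # Decode the pair spdk_nvme_print_completion prints as "(03/02)".
    sct=0x3 sc=0x02
    [ "$sct" = 0x3 ] && echo "SCT $sct / SC $sc -> ${path_status[$sc]}"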
00:22:52.902 Received shutdown signal, test time was about 32.581465 seconds
00:22:52.902
00:22:52.902                                                         Latency(us)
00:22:52.902 Device Information                   : runtime(s)    IOPS    MiB/s   Fail/s   TO/s    Average      min        max
00:22:52.902 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:22:52.902 Verification LBA range: start 0x0 length 0x4000
00:22:52.902 Nvme0n1                              : 32.58       7899.27  30.86   0.00     0.00    16178.54   421.74   4026531.84
00:22:52.902 ===================================================================================================================
00:22:52.902 Total                                : 7899.27     30.86    0.00    0.00     16178.54 421.74    4026531.84
00:22:52.902
00:22:52.902 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
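The teardown just traced is short: multipath_status.sh deletes the test subsystem over JSON-RPC, clears its exit trap, removes its scratch file, and calls nvmftestfini, which in turn syncs and unloads the kernel NVMe-oF modules (the rmmod lines that follow). A minimal sketch of that sequence as one shell function; the name cleanup and the $rootdir variable are this sketch's own, the commands are the ones in the trace:

    cleanup() {
        # Delete the subsystem the test created, so the target can exit cleanly.
        "$rootdir/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
        # Disarm the trap so a failure past this point does not re-enter cleanup.
        trap - SIGINT SIGTERM EXIT
        # Remove the scratch file the I/O test wrote its output into.
        rm -f "$rootdir/test/nvmf/host/try.txt"
        # Tear down the NVMe-oF target and unload the kernel nvme-tcp modules.
        nvmftestfini
    }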
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:53.158 rmmod nvme_tcp 00:22:53.158 rmmod nvme_fabrics 00:22:53.158 rmmod nvme_keyring 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 3447846 ']' 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 3447846 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 3447846 ']' 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 3447846 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3447846 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3447846' 00:22:53.158 killing process with pid 3447846 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 3447846 00:22:53.158 00:00:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 3447846 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.722 00:00:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.618 00:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:55.618 00:22:55.618 real 0m41.414s 00:22:55.618 user 2m4.933s 00:22:55.618 sys 0m10.470s 00:22:55.618 00:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:55.619 ************************************ 00:22:55.619 END TEST nvmf_host_multipath_status 00:22:55.619 ************************************ 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:55.619 ************************************ 00:22:55.619 START TEST nvmf_discovery_remove_ifc 00:22:55.619 ************************************ 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:55.619 * Looking for test storage... 00:22:55.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
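The nvmf/common.sh lines just traced generate the host identity that every nvme connect in this test reuses. A compact sketch of the pattern; the gen-hostnqn call, the variable names, and the array shape are straight from the trace, while the parameter expansion used to strip the UUID is this sketch's assumption about the mechanism:

    # Derive a host NQN once, then reuse its UUID suffix as the host ID.
    NVME_HOSTNQN=$(nvme gen-hostnqn)       # nqn.2014-08.org.nvmexpress:uuid:<random uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # assumed extraction: keep the text after "uuid:"
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    # e.g. nvme connect "${NVME_HOST[@]}" -t tcp -a <target ip> -s 4420 -n <subsystem nqn>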
00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:55.619 00:00:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:55.619 00:00:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:58.145 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:58.145 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.145 00:00:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:58.145 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:58.145 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:58.145 
00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:58.145 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.145 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:22:58.145 00:22:58.145 --- 10.0.0.2 ping statistics --- 00:22:58.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.145 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:58.145 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.145 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:22:58.145 00:22:58.145 --- 10.0.0.1 ping statistics --- 00:22:58.145 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.145 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=3454835 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:58.145 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 3454835 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3454835 ']' 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:58.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.146 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.146 [2024-07-25 00:00:28.476189] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
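The nvmf_tcp_init trace above builds the split topology used for the rest of the test: the target-facing port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as 10.0.0.2, its peer cvl_0_1 stays in the root namespace as 10.0.0.1, and one ping in each direction proves reachability before the target starts. Condensed into a standalone sketch (commands and names are taken from the trace; the iptables rule opens the NVMe/TCP listener port):

    # Two-namespace NVMe/TCP topology, mirroring the nvmf_tcp_init trace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns

The target (nvmf_tgt, pid 3454835 in this run) is then launched under ip netns exec cvl_0_0_ns_spdk, so its listeners live on the namespaced address.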
00:22:58.146 [2024-07-25 00:00:28.476304] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.146 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.146 [2024-07-25 00:00:28.542621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.146 [2024-07-25 00:00:28.650177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.146 [2024-07-25 00:00:28.650225] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.146 [2024-07-25 00:00:28.650262] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.146 [2024-07-25 00:00:28.650274] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.146 [2024-07-25 00:00:28.650284] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.146 [2024-07-25 00:00:28.650318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.403 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.404 [2024-07-25 00:00:28.793179] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.404 [2024-07-25 00:00:28.801389] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:58.404 null0 00:22:58.404 [2024-07-25 00:00:28.833324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=3454980 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 3454980 /tmp/host.sock 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 3454980 ']' 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:58.404 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.404 00:00:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.404 [2024-07-25 00:00:28.897396] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:22:58.404 [2024-07-25 00:00:28.897477] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3454980 ] 00:22:58.404 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.404 [2024-07-25 00:00:28.958187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.661 [2024-07-25 00:00:29.074909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.661 00:00:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.031 [2024-07-25 00:00:30.262085] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 
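Two applications are now running: the namespaced target on the default RPC socket and the host-side app started with -r /tmp/host.sock --wait-for-rpc -L bdev_nvme. The discovery attach that follows, written out as direct RPC calls (rpc_cmd in the trace is presumably the harness wrapper around scripts/rpc.py; every flag below is copied from the trace):

    # Discovery attach against the namespaced target's 8009 listener.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    scripts/rpc.py -s /tmp/host.sock framework_start_init
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach

The short loss/reconnect timeouts matter later: they bound how long the host keeps a controller alive once its interface disappears.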
00:23:00.031 [2024-07-25 00:00:30.262120] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:00.031 [2024-07-25 00:00:30.262142] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:00.031 [2024-07-25 00:00:30.349440] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:00.031 [2024-07-25 00:00:30.411870] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:00.031 [2024-07-25 00:00:30.411927] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:00.031 [2024-07-25 00:00:30.411965] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:00.031 [2024-07-25 00:00:30.411988] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:00.031 [2024-07-25 00:00:30.412020] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.031 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.032 [2024-07-25 00:00:30.419468] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1ec18e0 was disconnected and freed. delete nvme_qpair. 
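The wait_for_bdev/get_bdev_list helpers traced above and repeatedly below poll the host app until its bdev list equals an expected string (nvme0n1 now, the empty string after the interface is pulled, nvme1n1 after recovery). A plausible reconstruction from the xtrace alone (the function bodies are inferred, not copied from discovery_remove_ifc.sh; rpc_cmd is again the harness RPC wrapper):

    # Inferred from the xtrace: flatten bdev names, then poll for a match.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
    }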
00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:00.032 00:00:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:00.963 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:00.964 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:01.221 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:01.221 00:00:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:02.151 00:00:32 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:02.151 00:00:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:03.080 00:00:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:04.451 00:00:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:05.384 00:00:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:05.384 [2024-07-25 00:00:35.853351] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:05.384 [2024-07-25 00:00:35.853429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.384 [2024-07-25 00:00:35.853449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.384 [2024-07-25 00:00:35.853466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.384 [2024-07-25 00:00:35.853479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.384 [2024-07-25 00:00:35.853494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.384 [2024-07-25 00:00:35.853513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.384 [2024-07-25 00:00:35.853545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.384 [2024-07-25 00:00:35.853559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.384 [2024-07-25 00:00:35.853576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:05.384 [2024-07-25 00:00:35.853590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:05.384 [2024-07-25 00:00:35.853605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88320 is same with the state(5) to be set 00:23:05.384 [2024-07-25 00:00:35.863372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88320 (9): Bad file descriptor 00:23:05.384 [2024-07-25 00:00:35.873424] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.316 [2024-07-25 00:00:36.894283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:06.316 [2024-07-25 00:00:36.894346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e88320 with addr=10.0.0.2, port=4420 00:23:06.316 [2024-07-25 00:00:36.894372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e88320 is same with the state(5) to be set 00:23:06.316 [2024-07-25 00:00:36.894418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e88320 (9): Bad file descriptor 00:23:06.316 [2024-07-25 00:00:36.894885] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:06.316 [2024-07-25 00:00:36.894934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:06.316 [2024-07-25 00:00:36.894954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:06.316 [2024-07-25 00:00:36.894972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:06.316 [2024-07-25 00:00:36.895006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:06.316 [2024-07-25 00:00:36.895025] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:06.316 00:00:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:07.701 [2024-07-25 00:00:37.897527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:07.701 [2024-07-25 00:00:37.897569] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:07.701 [2024-07-25 00:00:37.897586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:07.701 [2024-07-25 00:00:37.897612] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:23:07.701 [2024-07-25 00:00:37.897635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
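errno 110 in the nvme_tcp_read_data and posix_sock_create errors above is ETIMEDOUT: with cvl_0_0's address deleted and the link downed inside the namespace, both the in-flight reads and every reconnect attempt time out. A quick way to confirm the errno name on Linux:

    # errno 110 -> ETIMEDOUT ("Connection timed out") on Linux.
    python3 -c 'import errno, os; print(errno.errorcode[110], "-", os.strerror(110))'

With --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, the host retries roughly twice before declaring the controller lost, which is what the next stretch of the trace shows.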
00:23:07.701 [2024-07-25 00:00:37.897678] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:07.701 [2024-07-25 00:00:37.897718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.701 [2024-07-25 00:00:37.897741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.701 [2024-07-25 00:00:37.897761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.701 [2024-07-25 00:00:37.897776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.701 [2024-07-25 00:00:37.897791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.701 [2024-07-25 00:00:37.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.701 [2024-07-25 00:00:37.897822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.701 [2024-07-25 00:00:37.897836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.701 [2024-07-25 00:00:37.897852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:07.701 [2024-07-25 00:00:37.897867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.701 [2024-07-25 00:00:37.897882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:23:07.701 [2024-07-25 00:00:37.898053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e87780 (9): Bad file descriptor 00:23:07.701 [2024-07-25 00:00:37.899075] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:07.701 [2024-07-25 00:00:37.899100] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:07.701 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.702 00:00:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:07.702 00:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:07.702 00:00:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.668 00:00:39 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:08.668 00:00:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.600 [2024-07-25 00:00:39.957429] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:09.600 [2024-07-25 00:00:39.957458] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:09.600 [2024-07-25 00:00:39.957482] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:09.600 [2024-07-25 00:00:40.043832] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:09.600 [2024-07-25 00:00:40.107700] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:09.600 [2024-07-25 00:00:40.107754] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:09.600 [2024-07-25 00:00:40.107792] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:09.600 [2024-07-25 00:00:40.107819] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:09.600 [2024-07-25 00:00:40.107847] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:09.600 [2024-07-25 00:00:40.114782] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1e8e120 was disconnected and freed. delete nvme_qpair. 
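That completes the recovery leg: with the address and link restored, the still-running discovery poller re-attaches and surfaces the same subsystem as a fresh controller, nvme1, backed by qpair 0x1e8e120. The remove/restore cycle the test exercises reduces to six lines, all taken verbatim from the discovery_remove_ifc.sh trace at @75-76, @79, @82-83 and @86:

    # Interface removal, loss detection, restore, and re-attach.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
    wait_for_bdev ''                 # nvme0n1 must vanish once ctrlr-loss-timeout expires
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1            # discovery re-attaches as a new controller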
00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:09.600 00:00:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.531 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 3454980 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3454980 ']' 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3454980 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3454980 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3454980' 00:23:10.789 killing process with pid 3454980 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3454980 00:23:10.789 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3454980 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # 
for i in {1..20} 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.047 rmmod nvme_tcp 00:23:11.047 rmmod nvme_fabrics 00:23:11.047 rmmod nvme_keyring 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 3454835 ']' 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 3454835 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 3454835 ']' 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 3454835 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3454835 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3454835' 00:23:11.047 killing process with pid 3454835 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 3454835 00:23:11.047 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 3454835 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.305 00:00:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:13.837 00:23:13.837 real 0m17.702s 00:23:13.837 user 0m25.540s 00:23:13.837 sys 0m3.058s 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 
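Teardown in the trace above runs in this order: kill the host app, then nvmftestfini unloads the kernel modules, kills the target, and dismantles the namespace. As a sketch (PIDs are from this run; that _remove_spdk_ns deletes cvl_0_0_ns_spdk is an assumption, since its body is hidden by xtrace_disable_per_cmd):

    # Teardown mirrored from the killprocess/nvmftestfini trace.
    kill 3454980                     # host-side app (hostpid)
    sync
    modprobe -v -r nvme-tcp          # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 3454835                     # target app (nvmfpid), inside the namespace
    ip netns delete cvl_0_0_ns_spdk  # assumption: what _remove_spdk_ns does here
    ip -4 addr flush cvl_0_1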
************************************ 00:23:13.837 END TEST nvmf_discovery_remove_ifc 00:23:13.837 ************************************ 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.837 ************************************ 00:23:13.837 START TEST nvmf_identify_kernel_target 00:23:13.837 ************************************ 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:13.837 * Looking for test storage... 00:23:13.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.837 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
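[Editor's note] Each test re-sources /etc/opt/spdk-pkgdep/paths/export.sh, which prepends the Go, golangci-lint, and protoc directories unconditionally, so PATH accumulates the long runs of duplicates visible above. A hypothetical guard that would keep the prepend idempotent — prepend_path is an illustrative helper, not part of the SPDK scripts as traced:

    prepend_path() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already on PATH, do nothing
            *) PATH="$1:$PATH" ;;     # otherwise prepend once
        esac
    }
    prepend_path /opt/golangci/1.54.2/bin
    prepend_path /opt/protoc/21.7/bin
    prepend_path /opt/go/1.21.1/bin
    export PATH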
00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:23:13.838 00:00:43 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:23:15.214 
00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.214 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:15.215 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
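[Editor's note] gather_supported_nvmf_pci_devs classifies NICs by PCI vendor:device ID (0x8086:0x159b is the Intel E810 family matched above, bound to the ice driver) and then maps each PCI function to its kernel net device through sysfs. A condensed sketch of that lookup, assuming the two E810 functions found in this run:

    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for net in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$net" ] || continue                       # skip functions with no bound netdev
            echo "Found net devices under $pci: ${net##*/}" # e.g. cvl_0_0, cvl_0_1
        done
    done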
00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:15.215 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:15.215 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:23:15.215 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.215 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
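[Editor's note] nvmf_tcp_init splits the two E810 ports across network namespaces so target and initiator traffic actually traverses the wire: cvl_0_0 (the target side, 10.0.0.2) moves into the cvl_0_0_ns_spdk namespace, cvl_0_1 (the initiator side, 10.0.0.1) stays in the root namespace, and the pings that follow verify reachability in both directions. Condensed from the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # root namespace -> target namespace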
00:23:15.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:23:15.472 00:23:15.472 --- 10.0.0.2 ping statistics --- 00:23:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.472 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:23:15.472 00:23:15.472 --- 10.0.0.1 ping statistics --- 00:23:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.472 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:15.472 00:00:45 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:16.404 Waiting for block devices as requested 00:23:16.661 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:16.661 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:16.919 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:16.919 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:16.919 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:16.919 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.176 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.176 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:17.176 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:17.176 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:17.441 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:17.441 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:17.441 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:17.441 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:17.697 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:17.697 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:17.697 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
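[Editor's note] configure_kernel_target, traced below, first picks a free block device (spdk-gpt.py reporting "No valid GPT data, bailing" means nvme0n1 carries no partition table and is safe to claim) and then builds the kernel nvmet target through configfs. Condensed sketch of that sequence; bash xtrace does not show redirection targets, so the attribute file names below are inferred from the stock kernel nvmet configfs interface rather than read from the log:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # inferred attribute
    echo 1 > "$subsys/attr_allow_any_host"                         # inferred attribute
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    # verification, as in the trace: the discovery log should list both the
    # discovery subsystem and nqn.2016-06.io.spdk:testnqn
    nvme discover -t tcp -a 10.0.0.1 -s 4420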
00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:17.955 No valid GPT data, bailing 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:17.955 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:17.955 00:23:17.955 Discovery Log Number of Records 2, Generation counter 2 00:23:17.955 =====Discovery Log Entry 0====== 00:23:17.955 trtype: tcp 00:23:17.955 adrfam: ipv4 00:23:17.955 subtype: current discovery subsystem 00:23:17.955 treq: not specified, sq flow control disable supported 00:23:17.955 portid: 1 00:23:17.955 trsvcid: 4420 00:23:17.955 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:17.955 traddr: 10.0.0.1 00:23:17.955 eflags: none 00:23:17.955 sectype: none 00:23:17.955 =====Discovery Log Entry 1====== 00:23:17.955 trtype: tcp 00:23:17.955 adrfam: ipv4 00:23:17.955 subtype: nvme subsystem 00:23:17.955 treq: not specified, sq flow control disable supported 00:23:17.955 portid: 1 00:23:17.955 trsvcid: 4420 00:23:17.955 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:17.955 traddr: 10.0.0.1 00:23:17.955 eflags: none 00:23:17.955 sectype: none 00:23:17.955 00:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:17.955 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:17.955 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.213 ===================================================== 00:23:18.213 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:18.213 ===================================================== 00:23:18.213 Controller Capabilities/Features 00:23:18.213 ================================ 00:23:18.213 Vendor ID: 0000 00:23:18.213 Subsystem Vendor ID: 0000 00:23:18.213 Serial Number: 2e261f014365141e72be 00:23:18.213 Model Number: Linux 00:23:18.213 Firmware Version: 6.7.0-68 00:23:18.213 Recommended Arb Burst: 0 00:23:18.213 IEEE OUI Identifier: 00 00 00 00:23:18.213 Multi-path I/O 00:23:18.213 May have multiple subsystem ports: No 00:23:18.213 May have multiple controllers: No 00:23:18.213 Associated with SR-IOV VF: No 00:23:18.213 Max Data Transfer Size: Unlimited 00:23:18.213 Max Number of Namespaces: 0 00:23:18.213 Max Number of I/O Queues: 1024 00:23:18.213 NVMe Specification Version (VS): 1.3 00:23:18.213 NVMe Specification Version (Identify): 1.3 00:23:18.213 Maximum Queue Entries: 1024 00:23:18.213 Contiguous Queues Required: No 00:23:18.213 Arbitration Mechanisms Supported 00:23:18.213 Weighted Round Robin: Not Supported 00:23:18.213 Vendor Specific: Not Supported 00:23:18.213 Reset Timeout: 7500 ms 00:23:18.213 Doorbell Stride: 4 bytes 00:23:18.213 NVM Subsystem Reset: Not Supported 00:23:18.213 Command Sets Supported 00:23:18.213 NVM Command Set: Supported 00:23:18.213 Boot Partition: Not Supported 00:23:18.213 Memory Page Size Minimum: 4096 bytes 00:23:18.213 Memory Page Size Maximum: 4096 bytes 00:23:18.213 Persistent Memory Region: Not Supported 00:23:18.213 Optional Asynchronous Events Supported 00:23:18.213 Namespace Attribute Notices: Not Supported 00:23:18.213 Firmware Activation Notices: Not Supported 00:23:18.213 ANA Change Notices: Not Supported 00:23:18.213 PLE Aggregate Log Change Notices: Not Supported 00:23:18.213 LBA Status Info Alert Notices: Not Supported 00:23:18.213 EGE Aggregate Log Change Notices: Not Supported 00:23:18.213 Normal NVM Subsystem Shutdown event: Not Supported 00:23:18.213 Zone Descriptor Change Notices: Not Supported 00:23:18.213 Discovery Log Change Notices: Supported 00:23:18.213 Controller Attributes 00:23:18.213 128-bit Host Identifier: Not Supported 00:23:18.213 Non-Operational Permissive Mode: Not Supported 00:23:18.213 NVM Sets: Not Supported 00:23:18.213 Read Recovery Levels: Not Supported 00:23:18.213 Endurance Groups: Not Supported 00:23:18.214 Predictable Latency Mode: Not Supported 00:23:18.214 Traffic Based Keep ALive: Not Supported 00:23:18.214 Namespace Granularity: Not Supported 00:23:18.214 SQ Associations: Not Supported 00:23:18.214 UUID List: Not Supported 00:23:18.214 Multi-Domain Subsystem: Not Supported 00:23:18.214 Fixed Capacity Management: Not Supported 00:23:18.214 Variable Capacity Management: Not Supported 00:23:18.214 Delete Endurance Group: Not Supported 00:23:18.214 Delete NVM Set: Not Supported 00:23:18.214 Extended LBA Formats Supported: Not Supported 00:23:18.214 Flexible Data Placement Supported: Not Supported 00:23:18.214 00:23:18.214 Controller Memory Buffer Support 00:23:18.214 ================================ 00:23:18.214 Supported: No 
00:23:18.214 00:23:18.214 Persistent Memory Region Support 00:23:18.214 ================================ 00:23:18.214 Supported: No 00:23:18.214 00:23:18.214 Admin Command Set Attributes 00:23:18.214 ============================ 00:23:18.214 Security Send/Receive: Not Supported 00:23:18.214 Format NVM: Not Supported 00:23:18.214 Firmware Activate/Download: Not Supported 00:23:18.214 Namespace Management: Not Supported 00:23:18.214 Device Self-Test: Not Supported 00:23:18.214 Directives: Not Supported 00:23:18.214 NVMe-MI: Not Supported 00:23:18.214 Virtualization Management: Not Supported 00:23:18.214 Doorbell Buffer Config: Not Supported 00:23:18.214 Get LBA Status Capability: Not Supported 00:23:18.214 Command & Feature Lockdown Capability: Not Supported 00:23:18.214 Abort Command Limit: 1 00:23:18.214 Async Event Request Limit: 1 00:23:18.214 Number of Firmware Slots: N/A 00:23:18.214 Firmware Slot 1 Read-Only: N/A 00:23:18.214 Firmware Activation Without Reset: N/A 00:23:18.214 Multiple Update Detection Support: N/A 00:23:18.214 Firmware Update Granularity: No Information Provided 00:23:18.214 Per-Namespace SMART Log: No 00:23:18.214 Asymmetric Namespace Access Log Page: Not Supported 00:23:18.214 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:18.214 Command Effects Log Page: Not Supported 00:23:18.214 Get Log Page Extended Data: Supported 00:23:18.214 Telemetry Log Pages: Not Supported 00:23:18.214 Persistent Event Log Pages: Not Supported 00:23:18.214 Supported Log Pages Log Page: May Support 00:23:18.214 Commands Supported & Effects Log Page: Not Supported 00:23:18.214 Feature Identifiers & Effects Log Page:May Support 00:23:18.214 NVMe-MI Commands & Effects Log Page: May Support 00:23:18.214 Data Area 4 for Telemetry Log: Not Supported 00:23:18.214 Error Log Page Entries Supported: 1 00:23:18.214 Keep Alive: Not Supported 00:23:18.214 00:23:18.214 NVM Command Set Attributes 00:23:18.214 ========================== 00:23:18.214 Submission Queue Entry Size 00:23:18.214 Max: 1 00:23:18.214 Min: 1 00:23:18.214 Completion Queue Entry Size 00:23:18.214 Max: 1 00:23:18.214 Min: 1 00:23:18.214 Number of Namespaces: 0 00:23:18.214 Compare Command: Not Supported 00:23:18.214 Write Uncorrectable Command: Not Supported 00:23:18.214 Dataset Management Command: Not Supported 00:23:18.214 Write Zeroes Command: Not Supported 00:23:18.214 Set Features Save Field: Not Supported 00:23:18.214 Reservations: Not Supported 00:23:18.214 Timestamp: Not Supported 00:23:18.214 Copy: Not Supported 00:23:18.214 Volatile Write Cache: Not Present 00:23:18.214 Atomic Write Unit (Normal): 1 00:23:18.214 Atomic Write Unit (PFail): 1 00:23:18.214 Atomic Compare & Write Unit: 1 00:23:18.214 Fused Compare & Write: Not Supported 00:23:18.214 Scatter-Gather List 00:23:18.214 SGL Command Set: Supported 00:23:18.214 SGL Keyed: Not Supported 00:23:18.214 SGL Bit Bucket Descriptor: Not Supported 00:23:18.214 SGL Metadata Pointer: Not Supported 00:23:18.214 Oversized SGL: Not Supported 00:23:18.214 SGL Metadata Address: Not Supported 00:23:18.214 SGL Offset: Supported 00:23:18.214 Transport SGL Data Block: Not Supported 00:23:18.214 Replay Protected Memory Block: Not Supported 00:23:18.214 00:23:18.214 Firmware Slot Information 00:23:18.214 ========================= 00:23:18.214 Active slot: 0 00:23:18.214 00:23:18.214 00:23:18.214 Error Log 00:23:18.214 ========= 00:23:18.214 00:23:18.214 Active Namespaces 00:23:18.214 ================= 00:23:18.214 Discovery Log Page 00:23:18.214 ================== 00:23:18.214 
Generation Counter: 2 00:23:18.214 Number of Records: 2 00:23:18.214 Record Format: 0 00:23:18.214 00:23:18.214 Discovery Log Entry 0 00:23:18.214 ---------------------- 00:23:18.214 Transport Type: 3 (TCP) 00:23:18.214 Address Family: 1 (IPv4) 00:23:18.214 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:18.214 Entry Flags: 00:23:18.214 Duplicate Returned Information: 0 00:23:18.214 Explicit Persistent Connection Support for Discovery: 0 00:23:18.214 Transport Requirements: 00:23:18.214 Secure Channel: Not Specified 00:23:18.214 Port ID: 1 (0x0001) 00:23:18.214 Controller ID: 65535 (0xffff) 00:23:18.214 Admin Max SQ Size: 32 00:23:18.214 Transport Service Identifier: 4420 00:23:18.214 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:18.214 Transport Address: 10.0.0.1 00:23:18.214 Discovery Log Entry 1 00:23:18.214 ---------------------- 00:23:18.214 Transport Type: 3 (TCP) 00:23:18.214 Address Family: 1 (IPv4) 00:23:18.214 Subsystem Type: 2 (NVM Subsystem) 00:23:18.214 Entry Flags: 00:23:18.214 Duplicate Returned Information: 0 00:23:18.214 Explicit Persistent Connection Support for Discovery: 0 00:23:18.214 Transport Requirements: 00:23:18.214 Secure Channel: Not Specified 00:23:18.214 Port ID: 1 (0x0001) 00:23:18.214 Controller ID: 65535 (0xffff) 00:23:18.214 Admin Max SQ Size: 32 00:23:18.214 Transport Service Identifier: 4420 00:23:18.214 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:18.214 Transport Address: 10.0.0.1 00:23:18.214 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:18.214 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.214 get_feature(0x01) failed 00:23:18.214 get_feature(0x02) failed 00:23:18.214 get_feature(0x04) failed 00:23:18.214 ===================================================== 00:23:18.214 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:18.214 ===================================================== 00:23:18.214 Controller Capabilities/Features 00:23:18.214 ================================ 00:23:18.214 Vendor ID: 0000 00:23:18.214 Subsystem Vendor ID: 0000 00:23:18.214 Serial Number: f48268810af49a83a971 00:23:18.214 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:18.214 Firmware Version: 6.7.0-68 00:23:18.214 Recommended Arb Burst: 6 00:23:18.214 IEEE OUI Identifier: 00 00 00 00:23:18.214 Multi-path I/O 00:23:18.214 May have multiple subsystem ports: Yes 00:23:18.214 May have multiple controllers: Yes 00:23:18.214 Associated with SR-IOV VF: No 00:23:18.214 Max Data Transfer Size: Unlimited 00:23:18.214 Max Number of Namespaces: 1024 00:23:18.214 Max Number of I/O Queues: 128 00:23:18.214 NVMe Specification Version (VS): 1.3 00:23:18.214 NVMe Specification Version (Identify): 1.3 00:23:18.214 Maximum Queue Entries: 1024 00:23:18.214 Contiguous Queues Required: No 00:23:18.214 Arbitration Mechanisms Supported 00:23:18.214 Weighted Round Robin: Not Supported 00:23:18.214 Vendor Specific: Not Supported 00:23:18.214 Reset Timeout: 7500 ms 00:23:18.214 Doorbell Stride: 4 bytes 00:23:18.214 NVM Subsystem Reset: Not Supported 00:23:18.214 Command Sets Supported 00:23:18.214 NVM Command Set: Supported 00:23:18.214 Boot Partition: Not Supported 00:23:18.214 Memory Page Size Minimum: 4096 bytes 00:23:18.214 Memory Page Size Maximum: 4096 bytes 00:23:18.214 
Persistent Memory Region: Not Supported 00:23:18.214 Optional Asynchronous Events Supported 00:23:18.214 Namespace Attribute Notices: Supported 00:23:18.214 Firmware Activation Notices: Not Supported 00:23:18.214 ANA Change Notices: Supported 00:23:18.214 PLE Aggregate Log Change Notices: Not Supported 00:23:18.214 LBA Status Info Alert Notices: Not Supported 00:23:18.214 EGE Aggregate Log Change Notices: Not Supported 00:23:18.214 Normal NVM Subsystem Shutdown event: Not Supported 00:23:18.214 Zone Descriptor Change Notices: Not Supported 00:23:18.214 Discovery Log Change Notices: Not Supported 00:23:18.214 Controller Attributes 00:23:18.214 128-bit Host Identifier: Supported 00:23:18.214 Non-Operational Permissive Mode: Not Supported 00:23:18.214 NVM Sets: Not Supported 00:23:18.214 Read Recovery Levels: Not Supported 00:23:18.215 Endurance Groups: Not Supported 00:23:18.215 Predictable Latency Mode: Not Supported 00:23:18.215 Traffic Based Keep ALive: Supported 00:23:18.215 Namespace Granularity: Not Supported 00:23:18.215 SQ Associations: Not Supported 00:23:18.215 UUID List: Not Supported 00:23:18.215 Multi-Domain Subsystem: Not Supported 00:23:18.215 Fixed Capacity Management: Not Supported 00:23:18.215 Variable Capacity Management: Not Supported 00:23:18.215 Delete Endurance Group: Not Supported 00:23:18.215 Delete NVM Set: Not Supported 00:23:18.215 Extended LBA Formats Supported: Not Supported 00:23:18.215 Flexible Data Placement Supported: Not Supported 00:23:18.215 00:23:18.215 Controller Memory Buffer Support 00:23:18.215 ================================ 00:23:18.215 Supported: No 00:23:18.215 00:23:18.215 Persistent Memory Region Support 00:23:18.215 ================================ 00:23:18.215 Supported: No 00:23:18.215 00:23:18.215 Admin Command Set Attributes 00:23:18.215 ============================ 00:23:18.215 Security Send/Receive: Not Supported 00:23:18.215 Format NVM: Not Supported 00:23:18.215 Firmware Activate/Download: Not Supported 00:23:18.215 Namespace Management: Not Supported 00:23:18.215 Device Self-Test: Not Supported 00:23:18.215 Directives: Not Supported 00:23:18.215 NVMe-MI: Not Supported 00:23:18.215 Virtualization Management: Not Supported 00:23:18.215 Doorbell Buffer Config: Not Supported 00:23:18.215 Get LBA Status Capability: Not Supported 00:23:18.215 Command & Feature Lockdown Capability: Not Supported 00:23:18.215 Abort Command Limit: 4 00:23:18.215 Async Event Request Limit: 4 00:23:18.215 Number of Firmware Slots: N/A 00:23:18.215 Firmware Slot 1 Read-Only: N/A 00:23:18.215 Firmware Activation Without Reset: N/A 00:23:18.215 Multiple Update Detection Support: N/A 00:23:18.215 Firmware Update Granularity: No Information Provided 00:23:18.215 Per-Namespace SMART Log: Yes 00:23:18.215 Asymmetric Namespace Access Log Page: Supported 00:23:18.215 ANA Transition Time : 10 sec 00:23:18.215 00:23:18.215 Asymmetric Namespace Access Capabilities 00:23:18.215 ANA Optimized State : Supported 00:23:18.215 ANA Non-Optimized State : Supported 00:23:18.215 ANA Inaccessible State : Supported 00:23:18.215 ANA Persistent Loss State : Supported 00:23:18.215 ANA Change State : Supported 00:23:18.215 ANAGRPID is not changed : No 00:23:18.215 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:18.215 00:23:18.215 ANA Group Identifier Maximum : 128 00:23:18.215 Number of ANA Group Identifiers : 128 00:23:18.215 Max Number of Allowed Namespaces : 1024 00:23:18.215 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:18.215 Command Effects Log Page: Supported 
00:23:18.215 Get Log Page Extended Data: Supported 00:23:18.215 Telemetry Log Pages: Not Supported 00:23:18.215 Persistent Event Log Pages: Not Supported 00:23:18.215 Supported Log Pages Log Page: May Support 00:23:18.215 Commands Supported & Effects Log Page: Not Supported 00:23:18.215 Feature Identifiers & Effects Log Page:May Support 00:23:18.215 NVMe-MI Commands & Effects Log Page: May Support 00:23:18.215 Data Area 4 for Telemetry Log: Not Supported 00:23:18.215 Error Log Page Entries Supported: 128 00:23:18.215 Keep Alive: Supported 00:23:18.215 Keep Alive Granularity: 1000 ms 00:23:18.215 00:23:18.215 NVM Command Set Attributes 00:23:18.215 ========================== 00:23:18.215 Submission Queue Entry Size 00:23:18.215 Max: 64 00:23:18.215 Min: 64 00:23:18.215 Completion Queue Entry Size 00:23:18.215 Max: 16 00:23:18.215 Min: 16 00:23:18.215 Number of Namespaces: 1024 00:23:18.215 Compare Command: Not Supported 00:23:18.215 Write Uncorrectable Command: Not Supported 00:23:18.215 Dataset Management Command: Supported 00:23:18.215 Write Zeroes Command: Supported 00:23:18.215 Set Features Save Field: Not Supported 00:23:18.215 Reservations: Not Supported 00:23:18.215 Timestamp: Not Supported 00:23:18.215 Copy: Not Supported 00:23:18.215 Volatile Write Cache: Present 00:23:18.215 Atomic Write Unit (Normal): 1 00:23:18.215 Atomic Write Unit (PFail): 1 00:23:18.215 Atomic Compare & Write Unit: 1 00:23:18.215 Fused Compare & Write: Not Supported 00:23:18.215 Scatter-Gather List 00:23:18.215 SGL Command Set: Supported 00:23:18.215 SGL Keyed: Not Supported 00:23:18.215 SGL Bit Bucket Descriptor: Not Supported 00:23:18.215 SGL Metadata Pointer: Not Supported 00:23:18.215 Oversized SGL: Not Supported 00:23:18.215 SGL Metadata Address: Not Supported 00:23:18.215 SGL Offset: Supported 00:23:18.215 Transport SGL Data Block: Not Supported 00:23:18.215 Replay Protected Memory Block: Not Supported 00:23:18.215 00:23:18.215 Firmware Slot Information 00:23:18.215 ========================= 00:23:18.215 Active slot: 0 00:23:18.215 00:23:18.215 Asymmetric Namespace Access 00:23:18.215 =========================== 00:23:18.215 Change Count : 0 00:23:18.215 Number of ANA Group Descriptors : 1 00:23:18.215 ANA Group Descriptor : 0 00:23:18.215 ANA Group ID : 1 00:23:18.215 Number of NSID Values : 1 00:23:18.215 Change Count : 0 00:23:18.215 ANA State : 1 00:23:18.215 Namespace Identifier : 1 00:23:18.215 00:23:18.215 Commands Supported and Effects 00:23:18.215 ============================== 00:23:18.215 Admin Commands 00:23:18.215 -------------- 00:23:18.215 Get Log Page (02h): Supported 00:23:18.215 Identify (06h): Supported 00:23:18.215 Abort (08h): Supported 00:23:18.215 Set Features (09h): Supported 00:23:18.215 Get Features (0Ah): Supported 00:23:18.215 Asynchronous Event Request (0Ch): Supported 00:23:18.215 Keep Alive (18h): Supported 00:23:18.215 I/O Commands 00:23:18.215 ------------ 00:23:18.215 Flush (00h): Supported 00:23:18.215 Write (01h): Supported LBA-Change 00:23:18.215 Read (02h): Supported 00:23:18.215 Write Zeroes (08h): Supported LBA-Change 00:23:18.215 Dataset Management (09h): Supported 00:23:18.215 00:23:18.215 Error Log 00:23:18.215 ========= 00:23:18.215 Entry: 0 00:23:18.215 Error Count: 0x3 00:23:18.215 Submission Queue Id: 0x0 00:23:18.215 Command Id: 0x5 00:23:18.215 Phase Bit: 0 00:23:18.215 Status Code: 0x2 00:23:18.215 Status Code Type: 0x0 00:23:18.215 Do Not Retry: 1 00:23:18.215 Error Location: 0x28 00:23:18.215 LBA: 0x0 00:23:18.215 Namespace: 0x0 00:23:18.215 Vendor Log 
Page: 0x0 00:23:18.215 ----------- 00:23:18.215 Entry: 1 00:23:18.215 Error Count: 0x2 00:23:18.215 Submission Queue Id: 0x0 00:23:18.215 Command Id: 0x5 00:23:18.215 Phase Bit: 0 00:23:18.215 Status Code: 0x2 00:23:18.215 Status Code Type: 0x0 00:23:18.215 Do Not Retry: 1 00:23:18.215 Error Location: 0x28 00:23:18.215 LBA: 0x0 00:23:18.215 Namespace: 0x0 00:23:18.215 Vendor Log Page: 0x0 00:23:18.215 ----------- 00:23:18.215 Entry: 2 00:23:18.215 Error Count: 0x1 00:23:18.215 Submission Queue Id: 0x0 00:23:18.215 Command Id: 0x4 00:23:18.215 Phase Bit: 0 00:23:18.215 Status Code: 0x2 00:23:18.215 Status Code Type: 0x0 00:23:18.215 Do Not Retry: 1 00:23:18.215 Error Location: 0x28 00:23:18.215 LBA: 0x0 00:23:18.215 Namespace: 0x0 00:23:18.215 Vendor Log Page: 0x0 00:23:18.215 00:23:18.215 Number of Queues 00:23:18.215 ================ 00:23:18.215 Number of I/O Submission Queues: 128 00:23:18.215 Number of I/O Completion Queues: 128 00:23:18.215 00:23:18.215 ZNS Specific Controller Data 00:23:18.215 ============================ 00:23:18.215 Zone Append Size Limit: 0 00:23:18.215 00:23:18.215 00:23:18.215 Active Namespaces 00:23:18.215 ================= 00:23:18.215 get_feature(0x05) failed 00:23:18.215 Namespace ID:1 00:23:18.215 Command Set Identifier: NVM (00h) 00:23:18.215 Deallocate: Supported 00:23:18.215 Deallocated/Unwritten Error: Not Supported 00:23:18.215 Deallocated Read Value: Unknown 00:23:18.215 Deallocate in Write Zeroes: Not Supported 00:23:18.215 Deallocated Guard Field: 0xFFFF 00:23:18.215 Flush: Supported 00:23:18.215 Reservation: Not Supported 00:23:18.215 Namespace Sharing Capabilities: Multiple Controllers 00:23:18.215 Size (in LBAs): 1953525168 (931GiB) 00:23:18.215 Capacity (in LBAs): 1953525168 (931GiB) 00:23:18.215 Utilization (in LBAs): 1953525168 (931GiB) 00:23:18.215 UUID: 00b236ea-079a-4019-b86f-cbb6a61aaef8 00:23:18.215 Thin Provisioning: Not Supported 00:23:18.215 Per-NS Atomic Units: Yes 00:23:18.215 Atomic Boundary Size (Normal): 0 00:23:18.216 Atomic Boundary Size (PFail): 0 00:23:18.216 Atomic Boundary Offset: 0 00:23:18.216 NGUID/EUI64 Never Reused: No 00:23:18.216 ANA group ID: 1 00:23:18.216 Namespace Write Protected: No 00:23:18.216 Number of LBA Formats: 1 00:23:18.216 Current LBA Format: LBA Format #00 00:23:18.216 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:18.216 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.216 rmmod nvme_tcp 00:23:18.216 rmmod nvme_fabrics 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:23:18.216 00:00:48 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:18.216 00:00:48 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:20.751 00:00:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:21.314 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:21.571 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:21.571 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:23:21.571 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:22.503 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:22.760 00:23:22.760 real 0m9.251s 00:23:22.760 user 0m1.876s 00:23:22.760 sys 0m3.312s 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.760 ************************************ 00:23:22.760 END TEST nvmf_identify_kernel_target 00:23:22.760 ************************************ 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.760 ************************************ 00:23:22.760 START TEST nvmf_auth_host 00:23:22.760 ************************************ 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:22.760 * Looking for test storage... 00:23:22.760 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.760 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
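The common.sh prologue traced above derives the initiator's identity once and reuses it for every later connect: a host NQN is generated with nvme-cli and the bare UUID is recovered from it for --hostid. A minimal standalone sketch of that pattern, assuming nvme-cli is installed; the address, port, and subsystem NQN below are the defaults visible in this run and are illustrative only:

# Derive a host identity the way common.sh does, then reuse it for nvme connect.
HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
HOSTID=${HOSTNQN##*uuid:}          # strip the NQN prefix, keep the raw UUID
nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
    -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn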
00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.761 00:00:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.658 00:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:24.658 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
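The vendor:device tables built above classify the supported NICs (E810 at 8086:1592/8086:159b, X722 at 8086:37d2, then the Mellanox list); each PCI function that survives the filter is resolved to its kernel netdev through sysfs, which is what produces the "Found net devices under ..." lines that follow. A stripped-down sketch of that lookup; the sysfs path is standard and the PCI address is taken from this log:

# Map a NIC's PCI address to its netdev name via sysfs.
pci=0000:0a:00.0
for net in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$net" ] || continue        # glob stays literal if no network driver is bound
    echo "Found net devices under $pci: ${net##*/}"   # -> cvl_0_0
done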
00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:24.658 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:24.658 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:24.658 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:24.658 00:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.658 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.659 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:23:24.916 00:23:24.916 --- 10.0.0.2 ping statistics --- 00:23:24.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.916 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:23:24.916 00:23:24.916 --- 10.0.0.1 ping statistics --- 00:23:24.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.916 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=3462066 00:23:24.916 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 3462066 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3462066 ']' 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
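nvmfappstart, traced above, wraps the target launch: the NVMF_APP command line is prefixed with the namespace wrapper so nvmf_tgt runs inside cvl_0_0_ns_spdk, the PID is recorded in nvmfpid, and waitforlisten blocks until the RPC socket is usable. A simplified sketch of that sequence; the binary path and socket location are the ones in this log, and the polling loop stands in for the real waitforlisten helper, which drives the check through rpc.py rather than a bare socket test:

# Start nvmf_tgt inside the target namespace and wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" || exit 1     # bail out if the target died during startup
    sleep 0.1
done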
00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.917 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:25.174 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bd4112261d74cc81203c0c628d7d3d93 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.h3D 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bd4112261d74cc81203c0c628d7d3d93 0 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bd4112261d74cc81203c0c628d7d3d93 0 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bd4112261d74cc81203c0c628d7d3d93 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.h3D 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.h3D 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.h3D 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.432 00:00:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6b105de30440db59b7087a48c8882abb6dc9dd55933dbfafd768a7cd4bbf5a8c 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ArO 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6b105de30440db59b7087a48c8882abb6dc9dd55933dbfafd768a7cd4bbf5a8c 3 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6b105de30440db59b7087a48c8882abb6dc9dd55933dbfafd768a7cd4bbf5a8c 3 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6b105de30440db59b7087a48c8882abb6dc9dd55933dbfafd768a7cd4bbf5a8c 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ArO 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ArO 00:23:25.432 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.ArO 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4cb91c72aace118fd20323e1504e072a7b6c2484412765f3 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.eo5 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4cb91c72aace118fd20323e1504e072a7b6c2484412765f3 0 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4cb91c72aace118fd20323e1504e072a7b6c2484412765f3 0 
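Each gen_dhchap_key call above follows the same recipe: pull N random bytes from /dev/urandom as hex with xxd, then hand the hex string to an inline python step that packs it into the DHHC-1 wire format, i.e. the ASCII secret concatenated with its little-endian CRC32, base64-encoded, behind a two-hex-digit hash indicator (00=null, 01=sha256, 02=sha384, 03=sha512 per the digests map above). A sketch of that packing step, assuming python3; the secret is the one generated above, and the output reproduces the DHHC-1:00:...: shape of the keys used later in this run:

key=4cb91c72aace118fd20323e1504e072a7b6c2484412765f3
digest=0
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                    # hex secret as ASCII bytes
crc = zlib.crc32(secret).to_bytes(4, "little")   # 4-byte little-endian CRC32 suffix
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
EOF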
00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4cb91c72aace118fd20323e1504e072a7b6c2484412765f3 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.eo5 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.eo5 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.eo5 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=427aa09458637949855776dbc491a7948d546e45afb0b26d 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.C7T 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 427aa09458637949855776dbc491a7948d546e45afb0b26d 2 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 427aa09458637949855776dbc491a7948d546e45afb0b26d 2 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=427aa09458637949855776dbc491a7948d546e45afb0b26d 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.C7T 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.C7T 00:23:25.433 00:00:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.C7T 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.433 00:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f813050bfbbe314e4d05c797950628de 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1BG 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f813050bfbbe314e4d05c797950628de 1 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f813050bfbbe314e4d05c797950628de 1 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f813050bfbbe314e4d05c797950628de 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.433 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1BG 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1BG 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1BG 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d3bd2d68cadeb91e4568327bcd50197f 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.p5S 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d3bd2d68cadeb91e4568327bcd50197f 1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d3bd2d68cadeb91e4568327bcd50197f 1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=d3bd2d68cadeb91e4568327bcd50197f 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.p5S 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.p5S 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.p5S 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5e87fba0ac120f2ed788184ffac70ee6369c59b49c9dbdf9 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.CGL 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5e87fba0ac120f2ed788184ffac70ee6369c59b49c9dbdf9 2 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5e87fba0ac120f2ed788184ffac70ee6369c59b49c9dbdf9 2 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5e87fba0ac120f2ed788184ffac70ee6369c59b49c9dbdf9 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.CGL 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.CGL 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.CGL 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:25.691 00:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=54a828db5169496f4e17f490da0664a7 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.i9A 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 54a828db5169496f4e17f490da0664a7 0 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 54a828db5169496f4e17f490da0664a7 0 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=54a828db5169496f4e17f490da0664a7 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.i9A 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.i9A 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.i9A 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3846f426d7b40ad7a0d39079aa302e1b600a9c2bbeb2acd6ae1d5ce8be24915b 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.S8H 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3846f426d7b40ad7a0d39079aa302e1b600a9c2bbeb2acd6ae1d5ce8be24915b 3 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3846f426d7b40ad7a0d39079aa302e1b600a9c2bbeb2acd6ae1d5ce8be24915b 3 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3846f426d7b40ad7a0d39079aa302e1b600a9c2bbeb2acd6ae1d5ce8be24915b 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.S8H 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.S8H 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.S8H 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3462066 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 3462066 ']' 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:25.691 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:25.692 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:25.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:25.692 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:25.692 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.h3D 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.ArO ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.ArO 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.eo5 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.C7T ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.C7T 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1BG 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.p5S ]] 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.p5S 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:25.949 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.CGL 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.i9A ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.i9A 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.S8H 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.207 00:00:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:26.207 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:26.208 00:00:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:27.143 Waiting for block devices as requested 00:23:27.400 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:27.400 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:27.657 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:27.657 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:27.657 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:27.945 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:27.945 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:27.945 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:28.203 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:28.203 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:28.203 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:28.203 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:28.460 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:28.460 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:28.460 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:28.460 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:28.460 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:29.026 No valid GPT data, bailing 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:29.026 00:00:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:29.026 00:23:29.026 Discovery Log Number of Records 2, Generation counter 2 00:23:29.026 =====Discovery Log Entry 0====== 00:23:29.026 trtype: tcp 00:23:29.026 adrfam: ipv4 00:23:29.026 subtype: current discovery subsystem 00:23:29.026 treq: not specified, sq flow control disable supported 00:23:29.026 portid: 1 00:23:29.026 trsvcid: 4420 00:23:29.026 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:29.026 traddr: 10.0.0.1 00:23:29.026 eflags: none 00:23:29.026 sectype: none 00:23:29.026 =====Discovery Log Entry 1====== 00:23:29.026 trtype: tcp 00:23:29.026 adrfam: ipv4 00:23:29.026 subtype: nvme subsystem 00:23:29.026 treq: not specified, sq flow control disable supported 00:23:29.026 portid: 1 00:23:29.026 trsvcid: 4420 00:23:29.026 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:29.026 traddr: 10.0.0.1 00:23:29.026 eflags: none 00:23:29.026 sectype: none 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:29.026 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.027 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.285 nvme0n1 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.285 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
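
The trace above stages DH-HMAC-CHAP on the kernel nvmet target: host/auth.sh@36-38 creates a host entry under configfs and links it into the subsystem's allowed_hosts, and each nvmet_auth_set_key call (host/auth.sh@42-51) writes the digest, DH group, and secrets for one keyid. The secrets use the NVMe DH-HMAC-CHAP representation from TP 8006, DHHC-1:xx:<base64>:, where xx names the hash the secret was generated for (00 = unqualified, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload is the raw secret with a 4-byte CRC-32 appended, which matches the decoded lengths of the keys seen here. A minimal sketch of the helper as reconstructed from this trace; the keys/ckeys arrays and the dhchap_* configfs attribute names are assumptions (the log shows only the echoed values, not their destination paths), so treat this as illustrative rather than the script's verbatim source:

  # Program DH-HMAC-CHAP material for one host on the kernel nvmet target.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}   # DHHC-1:xx:<base64>: secrets
      local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
      echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha256)
      echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe2048
      echo "$key"          > "$host/dhchap_key"       # host secret
      # A controller (bidirectional) secret exists only for some keyids:
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
  }
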
00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.286 00:00:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.543 nvme0n1 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.543 00:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.543 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.544 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.801 nvme0n1 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:29.801 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:29.802 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.059 nvme0n1 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.059 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.060 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.317 nvme0n1 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 
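
Each connect_authenticate pass (host/auth.sh@55-65 in the trace) drives the SPDK initiator side: it restricts the allowed digests and DH groups over RPC, attaches to the target with the secrets under test, and counts the attempt as authenticated only if the controller actually registers before being detached again. A sketch of that flow under the same assumptions as above; rpc.py stands in for the script's rpc_cmd wrapper, and key$keyid/ckey$keyid are the key names registered with the initiator earlier in the test (not shown in this excerpt). The RPC names and flags are the ones visible verbatim in the trace:

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pin the initiator to exactly the negotiation parameters under test.
      rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Some keyids have no controller key, so the ctrlr-key flag is optional:
      local -a ckey=()
      [[ -z ${ckeys[keyid]} ]] || ckey=(--dhchap-ctrlr-key "ckey$keyid")
      rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" "${ckey[@]}"
      # Authentication passed only if the controller came up under its name.
      [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc.py bdev_nvme_detach_controller nvme0
  }
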
00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:30.317 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.318 nvme0n1 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.318 00:01:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.318 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.575 
00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.575 00:01:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.575 nvme0n1 00:23:30.575 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.575 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.575 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.575 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.575 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.833 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.834 00:01:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:30.834 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.092 nvme0n1 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.092 00:01:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.092 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.350 nvme0n1 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.350 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.351 00:01:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.351 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.609 nvme0n1 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.609 00:01:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
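
The run of near-identical blocks here comes from three nested loops, visible as the host/auth.sh@100-103 markers in the trace: every digest is crossed with every DH group and every keyid, the target side is reprogrammed, and the connect is retried, so a single failing combination pins down exactly which digest/group/key shape broke. The keyid whose ckey is empty exercises unidirectional (host-only) authentication. The shape, reconstructed from the trace markers:

  for digest in "${digests[@]}"; do          # sha256 sha384 sha512
      for dhgroup in "${dhgroups[@]}"; do    # ffdhe2048 3072 4096 6144 8192
          for keyid in "${!keys[@]}"; do     # one entry has no ckey: one-way auth
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done
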
00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.609 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.868 nvme0n1 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:31.868 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.869 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.127 nvme0n1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.127 00:01:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.127 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.385 nvme0n1 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.385 00:01:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
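Every iteration ends with the same verification and teardown (auth.sh@64-@65 in the trace): list the controllers over RPC, confirm that exactly nvme0 came up, then detach it so the next digest/dhgroup/key combination starts clean. Roughly, assuming SPDK's scripts/rpc.py is the client behind rpc_cmd:

    # Sketch: post-connect check as traced at auth.sh@64-@65.
    name=$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]] || exit 1      # attach (and thus authentication) failed
    rpc.py bdev_nvme_detach_controller nvme0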
00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.643 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.900 nvme0n1 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
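The key= and ckey= values traced before and after this point are in-band authentication secrets in the DHHC-1:<t>:<base64 secret + CRC>: representation; the two-digit middle field selects the transformation applied to the secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and where a transformation is used the secret length matches the selected hash, which is why the key lengths vary across key0-key4. One plausible way to produce such secrets is nvme-cli's gen-dhchap-key subcommand (option names assumed from nvme-cli, not from this log):

    # Sketch: generate DHHC-1 secrets like the ones used in this test.
    nvme gen-dhchap-key --hmac=1 --nqn nqn.2024-02.io.spdk:cnode0   # SHA-256-transformed
    nvme gen-dhchap-key --hmac=0 --nqn nqn.2024-02.io.spdk:cnode0   # untransformed secret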
00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:32.900 00:01:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:32.900 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.157 nvme0n1 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.157 00:01:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.721 nvme0n1 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:33.721 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.286 nvme0n1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.287 00:01:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.853 nvme0n1 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.853 00:01:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:34.853 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.419 nvme0n1 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:35.419 
00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.419 00:01:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.985 nvme0n1 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:35.985 00:01:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.551 nvme0n1 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:36.551 00:01:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.485 nvme0n1 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:37.485 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.485 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.485 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]]
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.743 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:37.744 00:01:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.678 nvme0n1 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
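The same pattern now repeats for every remaining key id, so it is worth spelling out once. The xtrace lines above boil down to the following cycle; this is a minimal sketch of what the trace is executing, assuming rpc_cmd wraps SPDK's scripts/rpc.py as these suites usually do and that the keyN/ckeyN pairs were registered earlier in the run (not shown in this excerpt):

  # Sketch of one connect_authenticate iteration (hedged reconstruction,
  # not the verbatim host/auth.sh source).
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Pin the host to a single digest/DH group so only the combination
      # under test can be negotiated during the DH-HMAC-CHAP handshake.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Connect, bidirectionally when a controller key exists for this id.
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
      # Authentication succeeded only if the controller actually appeared.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      # Tear down so the next digest/dhgroup/key combination starts clean.
      rpc_cmd bdev_nvme_detach_controller nvme0
  }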
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:38.678 00:01:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.612 nvme0n1 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:39.612 00:01:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.545 nvme0n1 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:23:40.545 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:40.803 00:01:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.735 nvme0n1 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
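That completes the hmac(sha256)/ffdhe8192 sweep; the `for digest` line at host/auth.sh@100 below advances the outer loop to hmac(sha384). Reconstructed from the loop markers in the trace (a sketch, not the verbatim script), the enclosing sweep is simply:

  # Every digest is exercised against every DH group and every key id
  # (ids 0-3 also carry a controller key for bidirectional auth; id 4 does not).
  for digest in "${digests[@]}"; do        # sha256, sha384, ... per the trace
      for dhgroup in "${dhgroups[@]}"; do  # ffdhe2048 ... ffdhe8192
          for keyid in "${!keys[@]}"; do   # 0 1 2 3 4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
          done
      done
  done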
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]]
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.735 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.736 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.994 nvme0n1 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.994 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.995 nvme0n1 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:41.995 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.253 nvme0n1 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]]
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:42.253 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.254 00:01:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.512 nvme0n1 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.512 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.770 nvme0n1 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.770 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
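The dhgroup loop now advances to ffdhe3072 while the secrets stay the same. A note on those secrets: each DHHC-1:NN:<base64>: string carries its own integrity check. The NN field names the hash transform applied to the secret (00 means none; 01/02/03 correspond to SHA-256/384/512, i.e. 32/48/64-byte secrets), and the base64 payload is the secret followed by a 4-byte CRC-32. This is an editorial aside based on the NVMe DH-HMAC-CHAP secret representation, not part of the test script; the quick check below illustrates it on one of the traced keys:

  # 48 base64 chars decode to 36 bytes = 32-byte secret + 4-byte CRC-32.
  key='DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:'
  payload=${key#DHHC-1:*:}   # strip the "DHHC-1:00:" prefix
  payload=${payload%:}       # strip the trailing colon
  printf '%s' "$payload" | base64 -d | wc -c   # prints 36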
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.771 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.028 nvme0n1 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.028 00:01:13 
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]]
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.028 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.285 nvme0n1
00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
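The @42-@51 lines just traced are the target-side half of the test, nvmet_auth_set_key. xtrace does not show redirection targets, so the kernel nvmet configfs attribute paths below are an assumption consistent with the echoed values; only the echoed strings themselves come from the log:

    # Sketch only: the configfs paths are assumed, the values are from the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0  # assumed path

        echo "hmac(${digest})" > "${host}/dhchap_hash"
        echo "$dhgroup" > "${host}/dhchap_dhgroup"
        echo "$key" > "${host}/dhchap_key"
        # A bidirectional controller key is optional; keyid 4 below has none.
        [[ -z $ckey ]] || echo "$ckey" > "${host}/dhchap_ctrl_key"
    }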
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.285 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.286 00:01:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.544 nvme0n1
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
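The get_main_ns_ip helper whose body is being traced here just maps the transport in use to the environment variable holding the address to dial, then dereferences it. A minimal sketch under stated assumptions: TEST_TRANSPORT is an assumed variable name (the trace only shows its expanded value, tcp), and the indirect expansion is inferred from the echoed 10.0.0.1:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1   # traced as [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z $ip ]] && return 1               # traced as [[ -z NVMF_INITIATOR_IP ]]
        echo "${!ip}"                          # prints 10.0.0.1 in this run
    }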
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.544 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.802 nvme0n1
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:43.802 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:43.803 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.061 nvme0n1
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
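The @101/@102 loop heads traced above imply the driver below: every DH group is exercised against every configured key index. This is a sketch, not the script's verbatim source; the dhgroups array contents are assumed from the groups that appear in this log, and the keys/ckeys arrays are populated earlier in the run:

    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144)   # assumed; only these appear here
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"   # program the target
            connect_authenticate sha384 "$dhgroup" "$keyid" # verify from the host
        done
    done

That structure explains the shape of the rest of this section: the ffdhe3072 cycle above repeats verbatim for ffdhe4096 and then ffdhe6144, with only the --dhchap-dhgroups value and key material changing.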
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.061 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.340 nvme0n1
00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.340 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==:
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==:
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:44.618 00:01:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.876 nvme0n1
00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1:
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh:
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:44.876 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:44.877 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.135 nvme0n1
00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==:
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN:
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.135 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.393 nvme0n1
00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.393 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:45.393 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:45.393 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.393 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.393 00:01:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=:
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.652 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.910 nvme0n1
00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX:
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=:
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.910 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.476 nvme0n1 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.476 00:01:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:46.476 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.041 nvme0n1 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.041 00:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.041 00:01:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.041 00:01:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.606 nvme0n1 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.606 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:47.863 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:47.864 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:47.864 
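The records above repeat one connect_authenticate pass per (digest, dhgroup, keyid) tuple. Condensed out of the trace, the host-side flow looks roughly like the sketch below; it assumes rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py, and that key names such as key1/ckey1 were loaded into the target's key store earlier (that setup is outside this part of the trace). The NQNs, address, and port are the literal values visible in the surrounding records.

  # one connect_authenticate iteration (sha384 / ffdhe6144 / keyid=1), reconstructed from the trace
  digest=sha384 dhgroup=ffdhe6144 keyid=1
  # restrict the initiator to exactly the digest and FFDHE group under test
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # the attach only succeeds if DH-HMAC-CHAP completes; the controller key is passed
  # only when a ckey exists for this keyid (bidirectional authentication), exactly as
  # the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 does
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}
  # verify the controller actually came up, then tear it down before the next tuple
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0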
00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.429 nvme0n1 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.429 00:01:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.995 nvme0n1 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:48.995 00:01:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:48.995 00:01:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.927 nvme0n1 00:23:49.927 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.927 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.927 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:49.927 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
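get_main_ns_ip (traced above as nvmf/common.sh@741-755) resolves which address the host dials for the transport in use. A reconstruction from the traced statements follows; the indirect expansion in the final echo and the transport variable name are inferences, since only the expanded values ([[ -z tcp ]], [[ -z 10.0.0.1 ]], echo 10.0.0.1) survive in the trace.

  get_main_ns_ip() {
      local ip
      local -A ip_candidates
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # RDMA dials the first target IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP       # TCP dials the initiator-side IP
      # bail out if no transport is set or no candidate variable exists for it
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                  # the named variable must hold a value
      echo "${!ip}"                                # here: NVMF_INITIATOR_IP -> 10.0.0.1
  }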
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.928 00:01:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.858 nvme0n1 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.858 
00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.858 00:01:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 nvme0n1 00:23:51.789 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.789 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.789 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.789 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:51.789 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.047 00:01:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.980 nvme0n1 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.980 00:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:52.980 00:01:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.980 00:01:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.914 nvme0n1 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.914 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.173 nvme0n1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.173 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.431 nvme0n1 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:23:54.431 
00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.431 00:01:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.691 nvme0n1 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.691 
00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.691 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.949 nvme0n1 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.949 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.950 nvme0n1 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.950 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.208 nvme0n1 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.208 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.466 
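Every attach is verified the same way before the test moves on: the controller list must contain exactly the freshly attached nvme0, which is then detached so the next digest/dhgroup/keyid combination starts clean. The check reduces to three commands (sketch under the same rpc_cmd assumption as above):

  # A successful DH-HMAC-CHAP handshake leaves one controller named nvme0.
  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]
  # Tear down so the next iteration of the key matrix starts from scratch.
  rpc_cmd bdev_nvme_detach_controller nvme0

The [[ nvme0 == \n\v\m\e\0 ]] form in the trace is this same comparison; xtrace prints a quoted right-hand side with backslash escapes to mark it as a literal match rather than a glob.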
00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.466 00:01:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.466 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.467 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:55.467 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.467 00:01:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.467 nvme0n1 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.467 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:55.725 00:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.725 nvme0n1 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.725 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.983 00:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.983 nvme0n1 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:55.983 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:55.984 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:56.242 
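The host/auth.sh@101-104 markers give away the loop driving all of this output: an outer loop over DH groups and an inner loop over every key index, each iteration provisioning the target and then connecting. Reconstructed from the trace (the digest is fixed at sha512 throughout this excerpt; an enclosing loop over digests is likely but not visible here):

  # dhgroups seen in this run: ffdhe2048, ffdhe3072, then ffdhe4096 and
  # ffdhe6144 further down (the full array may contain more); keys/ckeys are
  # indexed 0 through 4.
  for dhgroup in "${dhgroups[@]}"; do                      # host/auth.sh@101
      for keyid in "${!keys[@]}"; do                       # host/auth.sh@102
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # host/auth.sh@103
          connect_authenticate sha512 "$dhgroup" "$keyid"  # host/auth.sh@104
      done
  done

Note that keyid 4 carries no controller key (ckey is empty, so the [[ -z '' ]] branch is taken), which is why its attach lines in the trace pass --dhchap-key key4 without a --dhchap-ctrlr-key.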
00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:56.242 nvme0n1 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.242 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:56.500 00:01:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.500 00:01:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.759 nvme0n1 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:56.759 00:01:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:56.759 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.760 00:01:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.760 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.018 nvme0n1 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.018 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.584 nvme0n1 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.584 00:01:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.842 nvme0n1 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.842 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 nvme0n1 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
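
The DHHC-1 strings exchanged above are DH-HMAC-CHAP secrets in the standard NVMe-oF textual form DHHC-1:<t>:<base64>:, where the base64 payload is the raw key followed by a CRC-32 of that key, and <t> names the transformation hash the secret was generated for (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512); transformed secrets match the hash output length, which is why the :03: secrets in this log are visibly longer than the :01: ones. As a hedged aside (the exact flags are an assumption, not taken from this log), nvme-cli can generate such secrets:

  # Hypothetical invocation: -m picks the transformation hash (0-3),
  # -n the host NQN the transformed secret is bound to.
  nvme gen-dhchap-key -m 3 -n nqn.2024-02.io.spdk:host0
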
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:23:58.100 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.101 00:01:28 
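
The four echo calls inside each nvmet_auth_set_key round (the 'hmac(sha512)', dhgroup, key, and ckey lines above) configure the kernel nvmet target for the host entry. A minimal sketch of where those writes land, assuming the standard Linux nvmet configfs layout (the path and attribute names come from the kernel target, not from this log):

  host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 'hmac(sha512)' > "$host/dhchap_hash"      # digest used for the HMAC
  echo ffdhe6144      > "$host/dhchap_dhgroup"   # FFDHE group for the DH step
  echo "$key"         > "$host/dhchap_key"       # host secret (DHHC-1:...)
  echo "$ckey"        > "$host/dhchap_ctrl_key"  # optional controller secret
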
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.101 00:01:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.667 nvme0n1 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.667 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:58.925 00:01:29 
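
The recurring local ip / ip_candidates trace is nvmf/common.sh's get_main_ns_ip helper. Reconstructed from the expansions visible above (TEST_TRANSPORT is assumed as the variable behind the literal tcp), it stores the name of the address variable per transport and dereferences it with indirect expansion, which is why the trace first tests the name NVMF_INITIATOR_IP and only then the value 10.0.0.1:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # Unknown or unset transport -> nothing to dereference.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}   # variable name, e.g. NVMF_INITIATOR_IP
      [[ -z ${!ip} ]] && return 1            # indirect expansion -> 10.0.0.1 here
      echo "${!ip}"
  }
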
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.925 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 nvme0n1 00:23:59.183 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.183 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.183 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.183 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:59.441 00:01:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.007 nvme0n1 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.007 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 nvme0n1 00:24:00.574 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.574 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.574 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.574 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 00:01:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:00.574 00:01:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.574 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.140 nvme0n1 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
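
Each connect_authenticate round above reduces to two SPDK JSON-RPCs: one pinning the initiator to a single digest/dhgroup pair, and one attaching with the matching keys (key3, ckey3, and the other key names refer to keys registered earlier in the test, outside this excerpt). Equivalent standalone calls, with rpc.py's path assumed and the flags copied from the keyid=3 round a few records back:

  ./scripts/rpc.py bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key3 --dhchap-ctrlr-key ckey3
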
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmQ0MTEyMjYxZDc0Y2M4MTIwM2MwYzYyOGQ3ZDNkOTNyAsCX: 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NmIxMDVkZTMwNDQwZGI1OWI3MDg3YTQ4Yzg4ODJhYmI2ZGM5ZGQ1NTkzM2RiZmFmZDc2OGE3Y2Q0YmJmNWE4Y7ti+Ic=: 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
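
The common/autotest_common.sh@559 / @587 pairs that bracket every rpc_cmd are the suite's xtrace guard: tracing is switched off while the RPC runs so its output stays readable, and the [[ 0 == 0 ]] lines are the restore side checking a nesting counter before re-enabling tracing. A rough sketch of the contract (the counter name is an assumption, and the real helpers also save and restore the full shell flag state, which this omits):

  xtrace_disable() { set +x; }
  xtrace_restore() {
      # Re-enable tracing only at the outermost nesting level.
      [[ $XTRACE_NESTING_LEVEL == 0 ]] && set -x
  }
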
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:01.140 00:01:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.147 nvme0n1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.147 00:01:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.079 nvme0n1 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.079 00:01:33 
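
The host/auth.sh@101-104 markers that keep recurring are the test's combination driver: an outer loop over DH groups and an inner loop over key IDs 0-4, with set-key on the target side followed by connect_authenticate on the initiator side. A sketch matching the ${dhgroups[@]} and ${!keys[@]} expansions visible above (the digest loop is assumed to sit one level further out, symmetric with the others):

  for digest in "${digests[@]}"; do             # digest loop: assumed, not shown here
      for dhgroup in "${dhgroups[@]}"; do       # ffdhe2048 .. ffdhe8192
          for keyid in "${!keys[@]}"; do        # 0..4
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
          done
      done
  done
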
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjgxMzA1MGJmYmJlMzE0ZTRkMDVjNzk3OTUwNjI4ZGXvXdw1: 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDNiZDJkNjhjYWRlYjkxZTQ1NjgzMjdiY2Q1MDE5N2Ykvpxh: 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.079 00:01:33 
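
Between rounds the test proves the attach really authenticated rather than silently failing: it lists controllers, extracts the name with jq, compares it against nvme0 (the backslash-escaped \n\v\m\e\0 pattern just forces a literal rather than glob match), and detaches so the next digest/dhgroup/key combination starts from a clean slate:

  name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == "nvme0" ]]                     # authenticated attach is visible here
  rpc_cmd bdev_nvme_detach_controller nvme0  # tear down before the next round
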
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.079 00:01:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.011 nvme0n1 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.011 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWU4N2ZiYTBhYzEyMGYyZWQ3ODgxODRmZmFjNzBlZTYzNjljNTliNDljOWRiZGY5h+XMlg==: 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTRhODI4ZGI1MTY5NDk2ZjRlMTdmNDkwZGEwNjY0YTfTLIwN: 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.269 00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:04.269 
00:01:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.202 nvme0n1 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:Mzg0NmY0MjZkN2I0MGFkN2EwZDM5MDc5YWEzMDJlMWI2MDBhOWMyYmJlYjJhY2Q2YWUxZDVjZThiZTI0OTE1YgGLI5E=: 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:05.202 00:01:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.134 nvme0n1 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NGNiOTFjNzJhYWNlMTE4ZmQyMDMyM2UxNTA0ZTA3MmE3YjZjMjQ4NDQxMjc2NWYz9QG/Xw==: 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDI3YWEwOTQ1ODYzNzk0OTg1NTc3NmRiYzQ5MWE3OTQ4ZDU0NmU0NWFmYjBiMjZk184rZA==: 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.135 request: 00:24:06.135 { 00:24:06.135 "name": "nvme0", 00:24:06.135 "trtype": "tcp", 00:24:06.135 "traddr": "10.0.0.1", 00:24:06.135 "adrfam": "ipv4", 00:24:06.135 "trsvcid": "4420", 00:24:06.135 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:06.135 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:06.135 "prchk_reftag": false, 00:24:06.135 "prchk_guard": false, 00:24:06.135 "hdgst": false, 00:24:06.135 "ddgst": false, 00:24:06.135 "method": "bdev_nvme_attach_controller", 00:24:06.135 "req_id": 1 00:24:06.135 } 00:24:06.135 Got JSON-RPC error response 00:24:06.135 response: 00:24:06.135 { 00:24:06.135 "code": -5, 00:24:06.135 "message": "Input/output error" 00:24:06.135 } 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:06.135 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.393 00:01:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.393 request: 00:24:06.393 { 00:24:06.393 "name": "nvme0", 00:24:06.393 "trtype": "tcp", 00:24:06.393 "traddr": "10.0.0.1", 00:24:06.393 "adrfam": "ipv4", 00:24:06.393 "trsvcid": "4420", 00:24:06.393 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:06.393 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:06.393 "prchk_reftag": false, 00:24:06.393 "prchk_guard": false, 00:24:06.393 "hdgst": false, 00:24:06.393 "ddgst": false, 00:24:06.393 "dhchap_key": "key2", 00:24:06.393 "method": "bdev_nvme_attach_controller", 00:24:06.393 "req_id": 1 00:24:06.393 } 00:24:06.393 Got JSON-RPC error response 00:24:06.393 response: 00:24:06.393 { 00:24:06.393 "code": -5, 00:24:06.393 "message": "Input/output error" 00:24:06.393 } 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:24:06.393 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:06.394 00:01:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.651 request: 00:24:06.651 { 00:24:06.651 "name": "nvme0", 00:24:06.651 "trtype": "tcp", 00:24:06.651 "traddr": "10.0.0.1", 00:24:06.651 "adrfam": "ipv4", 00:24:06.651 "trsvcid": "4420", 00:24:06.651 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:06.651 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:06.651 "prchk_reftag": false, 00:24:06.651 "prchk_guard": false, 00:24:06.651 "hdgst": false, 00:24:06.651 "ddgst": false, 00:24:06.651 "dhchap_key": "key1", 00:24:06.651 "dhchap_ctrlr_key": "ckey2", 00:24:06.651 "method": "bdev_nvme_attach_controller", 00:24:06.651 "req_id": 1 00:24:06.651 } 00:24:06.651 Got JSON-RPC error response 00:24:06.651 response: 00:24:06.651 { 00:24:06.651 "code": -5, 00:24:06.651 "message": "Input/output error" 00:24:06.651 } 00:24:06.651 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.652 rmmod nvme_tcp 00:24:06.652 rmmod nvme_fabrics 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 3462066 ']' 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 3462066 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 3462066 ']' 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 3462066 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3462066 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3462066' 00:24:06.652 killing process with pid 3462066 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 3462066 00:24:06.652 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 3462066 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.910 00:01:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.910 00:01:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:08.813 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:08.814 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:09.072 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:24:09.072 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:24:09.072 00:01:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:10.447 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:10.447 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:24:10.447 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:24:11.383 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:24:11.383 00:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.h3D /tmp/spdk.key-null.eo5 /tmp/spdk.key-sha256.1BG /tmp/spdk.key-sha384.CGL /tmp/spdk.key-sha512.S8H /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:24:11.383 00:01:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:12.757 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:12.757 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:12.757 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:12.757 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:12.757 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:12.757 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:12.757 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:12.757 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:12.757 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:12.757 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:24:12.757 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:24:12.757 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:24:12.757 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:24:12.757 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:24:12.757 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:24:12.757 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:24:12.757 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:24:12.757 00:24:12.757 real 0m49.946s 00:24:12.757 user 0m47.742s 00:24:12.757 sys 0m5.885s 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.757 ************************************ 00:24:12.757 END TEST nvmf_auth_host 00:24:12.757 ************************************ 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.757 ************************************ 00:24:12.757 START TEST nvmf_digest 00:24:12.757 ************************************ 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:12.757 * Looking for test storage... 
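Note, before the digest suite gets going: the nvmet_auth_set_key steps the auth test above kept repeating reduce to a few configfs writes on the kernel target. A minimal sketch follows; the echoed values are taken from the host/auth.sh@48-@50 trace above and the hosts directory from the cleanup at host/auth.sh@26, but the attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key) are assumed from the kernel nvmet auth interface rather than shown verbatim in this log, and the key value is truncated/hypothetical:

    HOSTNQN=nqn.2024-02.io.spdk:host0
    HOSTDIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN
    KEY='DHHC-1:03:Mzg0NmY0...='                 # hypothetical generated secret
    mkdir -p "$HOSTDIR"
    echo 'hmac(sha512)' > "$HOSTDIR/dhchap_hash"     # digest, as echoed at host/auth.sh@48
    echo ffdhe8192 > "$HOSTDIR/dhchap_dhgroup"       # DH group, as echoed at host/auth.sh@49
    echo "$KEY" > "$HOSTDIR/dhchap_key"              # host secret, as echoed at host/auth.sh@50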
00:24:12.757 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.757 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:12.758 
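The parameters just set at host/digest.sh@14-@16 (nqn, bperfsock=/var/tmp/bperf.sock, runtime=2) feed two small helpers that the rest of this test leans on. A condensed sketch of the pattern, inferred from the bperf_rpc and bperf_py invocations that appear later in this log ($rootdir stands for the workspace spdk checkout and is an assumption here):

    bperfsock=/var/tmp/bperf.sock
    runtime=2
    bperf_rpc() {   # send a JSON-RPC to the bdevperf app's socket, not the target's
        "$rootdir/scripts/rpc.py" -s "$bperfsock" "$@"
    }
    bperf_py() {    # drive the running bdevperf instance, e.g. 'bperf_py perform_tests'
        "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s "$bperfsock" "$@"
    }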
00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:24:12.758 00:01:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:15.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:15.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.288 
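Each of these "Found ..." discovery steps maps a PCI function to its kernel net devices with a plain sysfs glob; a standalone equivalent of the nvmf/common.sh@383 glob shown here and the @399/@400 prefix-strip and echo that follow just below:

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev of this port
    pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # e.g. cvl_0_0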
00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:15.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:15.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:15.288 00:01:45 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:15.288 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:15.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:15.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:24:15.288 00:24:15.288 --- 10.0.0.2 ping statistics --- 00:24:15.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.289 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:15.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:15.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:15.289 00:24:15.289 --- 10.0.0.1 ping statistics --- 00:24:15.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.289 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:15.289 ************************************ 00:24:15.289 START TEST nvmf_digest_clean 00:24:15.289 ************************************ 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=3471529 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 3471529 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3471529 ']' 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:15.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:15.289 00:01:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:15.289 [2024-07-25 00:01:45.546265] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:15.289 [2024-07-25 00:01:45.546370] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:15.289 EAL: No free 2048 kB hugepages reported on node 1 00:24:15.289 [2024-07-25 00:01:45.615996] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.289 [2024-07-25 00:01:45.734660] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:15.289 [2024-07-25 00:01:45.734734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:15.289 [2024-07-25 00:01:45.734750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:15.289 [2024-07-25 00:01:45.734765] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:15.289 [2024-07-25 00:01:45.734777] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
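The nvmf_tgt launched above runs inside the network namespace that nvmf_tcp_init built a few lines earlier. Condensing that traced sequence (NET_TYPE=phy, so the two physical e810 ports are split between the default namespace and a private one):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator side keeps 10.0.0.1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port

The two ping checks above then confirm the 10.0.0.1 <-> 10.0.0.2 path before the target app is started via "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt".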
00:24:15.289 [2024-07-25 00:01:45.734811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:16.223 null0 00:24:16.223 [2024-07-25 00:01:46.652368] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:16.223 [2024-07-25 00:01:46.676585] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3471680 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3471680 /var/tmp/bperf.sock 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3471680 ']' 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:16.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:16.223 00:01:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:16.223 [2024-07-25 00:01:46.726198] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:16.223 [2024-07-25 00:01:46.726283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3471680 ] 00:24:16.223 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.223 [2024-07-25 00:01:46.792284] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.481 [2024-07-25 00:01:46.912320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.415 00:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:17.415 00:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:17.415 00:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:17.415 00:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:17.415 00:01:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:17.673 00:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.673 00:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.931 nvme0n1 00:24:17.931 00:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:17.931 00:01:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:18.188 Running I/O for 2 seconds... 
00:24:20.085
00:24:20.085 Latency(us)
00:24:20.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.085 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:20.085 nvme0n1 : 2.01 18014.15 70.37 0.00 0.00 7097.59 3810.80 16214.09
00:24:20.085 ===================================================================================================================
00:24:20.085 Total : 18014.15 70.37 0.00 0.00 7097.59 3810.80 16214.09
00:24:20.085 0
00:24:20.085 00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:20.085 | select(.opcode=="crc32c")
00:24:20.085 | "\(.module_name) \(.executed)"'
00:24:20.342 00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3471680
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3471680 ']'
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3471680
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471680
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471680'
killing process with pid 3471680
00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3471680
Received shutdown signal, test time was about 2.000000 seconds
00:24:20.343
00:24:20.343 Latency(us)
00:24:20.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:20.343 ===================================================================================================================
00:24:20.343 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:20.343 00:01:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 3471680 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3472215 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3472215 /var/tmp/bperf.sock 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3472215 ']' 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:20.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:20.908 00:01:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:20.908 [2024-07-25 00:01:51.269613] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:20.908 [2024-07-25 00:01:51.269689] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472215 ] 00:24:20.908 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:20.908 Zero copy mechanism will not be used. 
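Each run_bperf pass relaunches bdevperf held at --wait-for-rpc and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that launch-and-wait pattern — the polling loop is a simplified stand-in for the real waitforlisten in autotest_common.sh:

```bash
SOCK=/var/tmp/bperf.sock

# Flags copied from the trace: core mask 0x2, randread, 128 KiB I/O,
# queue depth 16, 2-second run, held until perform_tests arrives (-z).
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
bperfpid=$!

# Simplified stand-in for waitforlisten(): poll until the app answers
# on the UNIX domain socket, bailing out if it died first.
for ((i = 0; i < 100; i++)); do
    "$SPDK_DIR/scripts/rpc.py" -s "$SOCK" rpc_get_methods &>/dev/null && break
    kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
    sleep 0.1
done
```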
00:24:20.908 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.908 [2024-07-25 00:01:51.331106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.908 [2024-07-25 00:01:51.449074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.839 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:21.839 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:21.839 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:21.839 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:21.839 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:22.097 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.097 00:01:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.683 nvme0n1 00:24:22.683 00:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:22.683 00:01:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.683 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:22.683 Zero copy mechanism will not be used. 00:24:22.683 Running I/O for 2 seconds... 
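When each run finishes, the harness confirms the digests were really computed: it pulls accel framework statistics over the same socket and checks that crc32c executed at least once in the expected module (software here, since scan_dsa=false). Reassembled from the get_accel_stats traces on either side of this point:

```bash
# Ask bdevperf's accel layer which module ran crc32c and how often.
read -r acc_module acc_executed < <(
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[]
                | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'
)

exp_module=software            # scan_dsa=false, so software crc32c is expected
(( acc_executed > 0 ))         # digests must actually have been computed
[[ $acc_module == "$exp_module" ]]
```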
00:24:24.581
00:24:24.581 Latency(us)
00:24:24.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:24.581 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:24.581 nvme0n1 : 2.00 3652.63 456.58 0.00 0.00 4376.00 1450.29 7475.96
00:24:24.581 ===================================================================================================================
00:24:24.581 Total : 3652.63 456.58 0.00 0.00 4376.00 1450.29 7475.96
00:24:24.581 0
00:24:24.581 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:24.581 | select(.opcode=="crc32c")
00:24:24.581 | "\(.module_name) \(.executed)"'
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:24.839 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3472215
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3472215 ']'
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3472215
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472215
00:24:25.096 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472215'
00:24:25.096 killing process with pid 3472215
00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3472215
00:24:25.096 Received shutdown signal, test time was about 2.000000 seconds
00:24:25.096
00:24:25.096 Latency(us)
00:24:25.096 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:25.096 ===================================================================================================================
00:24:25.096 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:25.096 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 3472215 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3472755 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3472755 /var/tmp/bperf.sock 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3472755 ']' 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:25.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:25.354 00:01:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:25.354 [2024-07-25 00:01:55.769824] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
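The teardown traced just above goes through killprocess from autotest_common.sh; its steps reassemble roughly to the following (a sketch from the traced commands, not a verbatim copy of the function):

```bash
killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                        # must still be running
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    [ "$process_name" != sudo ] || return 1           # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap and collect exit status
}
```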
00:24:25.354 [2024-07-25 00:01:55.769903] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3472755 ] 00:24:25.354 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.354 [2024-07-25 00:01:55.831108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.354 [2024-07-25 00:01:55.944690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.288 00:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:26.288 00:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:26.288 00:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:26.288 00:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:26.288 00:01:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:26.546 00:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.546 00:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:26.803 nvme0n1 00:24:27.061 00:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:27.061 00:01:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:27.061 Running I/O for 2 seconds... 
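The block that follows is bdevperf's standard Latency(us) summary. If you need the totals programmatically from a captured (unprefixed) bdevperf log, a small awk filter is enough — a convenience sketch, not something the harness itself runs:

```bash
# Print IOPS and average latency from the "Total :" row of a saved
# bdevperf summary; note the zeroed shutdown-time table matches too.
awk '/Total[[:space:]]*:/ { print "iops=" $3, "avg_us=" $7 }' bdevperf.log
```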
00:24:28.959
00:24:28.959 Latency(us)
00:24:28.959 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:28.959 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:28.959 nvme0n1 : 2.01 19070.66 74.49 0.00 0.00 6695.84 6116.69 15437.37
00:24:28.959 ===================================================================================================================
00:24:28.959 Total : 19070.66 74.49 0.00 0.00 6695.84 6116.69 15437.37
00:24:28.959 0
00:24:28.959 00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:28.959 | select(.opcode=="crc32c")
00:24:28.959 | "\(.module_name) \(.executed)"'
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:24:29.217 00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3472755
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3472755 ']'
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3472755
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3472755
00:24:29.476 00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3472755'
00:24:29.476 killing process with pid 3472755
00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3472755
00:24:29.476 Received shutdown signal, test time was about 2.000000 seconds
00:24:29.476
00:24:29.476 Latency(us)
00:24:29.476 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:29.476 ===================================================================================================================
00:24:29.476 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:29.476 00:01:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 3472755 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3473286 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3473286 /var/tmp/bperf.sock 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 3473286 ']' 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:29.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:29.734 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:29.734 [2024-07-25 00:02:00.153927] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:29.734 [2024-07-25 00:02:00.154004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473286 ] 00:24:29.734 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:29.734 Zero copy mechanism will not be used. 
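This launch is the last cell of the digest-clean matrix. Condensed, the four run_bperf invocations traced in this section (host/digest.sh@129-131 plus the initial randread 4096 128 case, scan_dsa=false throughout) amount to:

```bash
# The clean test sweeps rw/io-size/queue-depth with DSA scanning off;
# rows reassembled from the host/digest.sh trace in this section, and
# run_bperf is digest.sh's own helper, assumed sourced.
while read -r rw bs qd dsa; do
    run_bperf "$rw" "$bs" "$qd" "$dsa"
done <<'EOF'
randread 4096 128 false
randread 131072 16 false
randwrite 4096 128 false
randwrite 131072 16 false
EOF
```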
00:24:29.734 EAL: No free 2048 kB hugepages reported on node 1 00:24:29.734 [2024-07-25 00:02:00.216138] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.734 [2024-07-25 00:02:00.333571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.992 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:29.992 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:24:29.992 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:29.992 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:29.992 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:30.250 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.250 00:02:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:30.815 nvme0n1 00:24:30.815 00:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:30.815 00:02:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:30.815 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:30.815 Zero copy mechanism will not be used. 00:24:30.815 Running I/O for 2 seconds... 
00:24:32.712
00:24:32.712 Latency(us)
00:24:32.712 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:32.712 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:32.712 nvme0n1 : 2.00 3686.23 460.78 0.00 0.00 4330.35 3179.71 11019.76
00:24:32.712 ===================================================================================================================
00:24:32.712 Total : 3686.23 460.78 0.00 0.00 4330.35 3179.71 11019.76
00:24:32.712 0
00:24:32.712 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:24:32.712 | select(.opcode=="crc32c")
00:24:32.712 | "\(.module_name) \(.executed)"'
00:24:32.970 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3473286
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3473286 ']'
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3473286
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473286
00:24:33.227 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473286'
00:24:33.227 killing process with pid 3473286
00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3473286
00:24:33.227 Received shutdown signal, test time was about 2.000000 seconds
00:24:33.227
00:24:33.227 Latency(us)
00:24:33.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:33.227 ===================================================================================================================
00:24:33.227 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:33.227 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@972 -- # wait 3473286 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3471529 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 3471529 ']' 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 3471529 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3471529 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3471529' 00:24:33.485 killing process with pid 3471529 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 3471529 00:24:33.485 00:02:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 3471529 00:24:33.743 00:24:33.743 real 0m18.692s 00:24:33.743 user 0m37.451s 00:24:33.743 sys 0m4.267s 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:33.743 ************************************ 00:24:33.743 END TEST nvmf_digest_clean 00:24:33.743 ************************************ 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:33.743 ************************************ 00:24:33.743 START TEST nvmf_digest_error 00:24:33.743 ************************************ 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=3473729 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:33.743 00:02:04 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 3473729 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3473729 ']' 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:33.743 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:33.743 [2024-07-25 00:02:04.281488] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:33.743 [2024-07-25 00:02:04.281559] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.743 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.743 [2024-07-25 00:02:04.343032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.001 [2024-07-25 00:02:04.448380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.001 [2024-07-25 00:02:04.448435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.001 [2024-07-25 00:02:04.448464] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.001 [2024-07-25 00:02:04.448475] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.001 [2024-07-25 00:02:04.448485] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
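The notices above confirm the target came up with every tracepoint group enabled (-e 0xFFFF) and tell you exactly how to harvest the trace. Following the log's own suggestion (binary path assumed from the usual SPDK build layout):

```bash
# Live snapshot of nvmf tracepoints from app instance 0, as suggested:
"$SPDK_DIR/build/bin/spdk_trace" -s nvmf -i 0

# ...or keep the shared-memory trace file for offline analysis later:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
```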
00:24:34.001 [2024-07-25 00:02:04.448512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.001 [2024-07-25 00:02:04.517075] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.001 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.259 null0 00:24:34.259 [2024-07-25 00:02:04.633148] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:34.259 [2024-07-25 00:02:04.657397] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3473872 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3473872 /var/tmp/bperf.sock 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3473872 ']' 
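The digest-error target setup is mostly hidden inside rpc_cmd/common_target_config here; only accel_assign_opc is traced verbatim, while the null0 bdev and the TCP listener surface only as notices. A plausible reconstruction of the elided RPC sequence — the sizes, serial number, and rpc() helper are illustrative:

```bash
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }   # default socket: the nvmf target

rpc accel_assign_opc -o crc32c -m error      # verbatim: route crc32c to "error"
rpc framework_start_init                     # target was started --wait-for-rpc
rpc bdev_null_create null0 100 4096          # the "null0" bdev seen above
rpc nvmf_create_transport -t tcp             # "*** TCP Transport Init ***"
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420               # listener notice on 10.0.0.2:4420
```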
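On the initiator side, the trace that follows (host/digest.sh@61-@67) first makes injected faults survivable, attaches with digests on while injection is disabled, then arms the error module to corrupt 256 crc32c results. That corruption is what produces the long run of "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions below: the digest computed for the data no longer matches, the host detects the mismatch on receive, and the bdev layer retries. A condensed restatement of those traced calls:

```bash
RPC="$SPDK_DIR/scripts/rpc.py"

# Host (bdevperf, /var/tmp/bperf.sock): retry indefinitely and keep
# per-error statistics, so injected digest failures are counted, not fatal.
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# Target (default socket): injection off while the controller attaches...
"$RPC" accel_error_inject_error -o crc32c -t disable
"$RPC" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# ...then corrupt the next 256 crc32c results before perform_tests runs.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 256
```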
00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:34.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.259 00:02:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.259 [2024-07-25 00:02:04.703137] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:24:34.259 [2024-07-25 00:02:04.703218] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3473872 ] 00:24:34.259 EAL: No free 2048 kB hugepages reported on node 1 00:24:34.259 [2024-07-25 00:02:04.767081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.517 [2024-07-25 00:02:04.886170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.518 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:34.518 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:24:34.518 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.518 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:34.775 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:35.033 nvme0n1 00:24:35.033 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:35.033 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.033 00:02:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:35.033 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.033 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:35.033 00:02:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:35.291 Running I/O for 2 seconds... 00:24:35.291 [2024-07-25 00:02:05.734700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.291 [2024-07-25 00:02:05.734754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.291 [2024-07-25 00:02:05.734776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.291 [2024-07-25 00:02:05.752576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.291 [2024-07-25 00:02:05.752614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.291 [2024-07-25 00:02:05.752634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.291 [2024-07-25 00:02:05.764801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.291 [2024-07-25 00:02:05.764837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13898 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.291 [2024-07-25 00:02:05.764857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.291 [2024-07-25 00:02:05.780358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.780388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:15237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.780404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.791006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.791041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.791062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.807190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.807225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.807253] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.822050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.822085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.822114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.834621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.834655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.834675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.851199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.851233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.851264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.865750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.865781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.865798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.877693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.877727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.877746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.292 [2024-07-25 00:02:05.893560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.292 [2024-07-25 00:02:05.893591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.292 [2024-07-25 00:02:05.893625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.550 [2024-07-25 00:02:05.907390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.550 [2024-07-25 00:02:05.907423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.550 
[2024-07-25 00:02:05.907441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.919673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.919708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.919728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.933817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.933851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.933870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.947342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.947378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.947396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.960009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.960043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.960062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.974437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.974468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.974486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:05.987324] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:05.987351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:05.987368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.001580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.001624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16533 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.001643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.014038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.014072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:5196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.014092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.029002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.029036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.029055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.042997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.043031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.043051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.056415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.056445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.056462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.068224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.068266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.068288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.083765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.083799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:35.551 [2024-07-25 00:02:06.083818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:35.551 [2024-07-25 00:02:06.094888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0) 00:24:35.551 [2024-07-25 00:02:06.094921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:119 nsid:1 lba:1112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:35.551 [2024-07-25 00:02:06.094940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further entries in the same three-line pattern elided (00:02:06.108980 through 00:02:07.720303): each injected digest failure logs nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbffcb0), then nvme_qpair.c prints the affected READ command (qid:1, len:1; only cid and lba vary) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the harness counts 141 such completions below ...]
00:24:37.361
00:24:37.361 Latency(us)
00:24:37.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:37.361 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:24:37.361 nvme0n1 : 2.00 17992.65 70.28 0.00 0.00 7105.39 3422.44 24369.68
00:24:37.361 ===================================================================================================================
00:24:37.361 Total : 17992.65 70.28 0.00 0.00 7105.39 3422.44 24369.68
00:24:37.361 0
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 ))
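For readers following the trace: get_transient_errcount is just an RPC-plus-jq pipeline. A minimal standalone sketch of the same query (assuming, as in this run, that bdevperf still listens on /var/tmp/bperf.sock, the bdev is nvme0n1, and error counting was enabled via bdev_nvme_set_options --nvme-error-stat):

  # Dump per-bdev I/O stats from the bdevperf app and extract the number of
  # completions that ended in NVMe status 0x22 (COMMAND TRANSIENT TRANSPORT ERROR,
  # the "(00/22)" printed in the entries above).
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
  # The (( 141 > 0 )) check above passes because this printed 141: every injected
  # digest failure was retried and counted rather than failing the job.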
00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3473872
00:24:37.619 00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3473872 ']'
00:24:37.619 00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3473872
00:24:37.619 00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:24:37.619 00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:37.619 00:02:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473872
00:24:37.619 00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:37.619 00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:37.619 00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473872'
killing process with pid 3473872
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3473872
Received shutdown signal, test time was about 2.000000 seconds
00:24:37.619
00:24:37.619 Latency(us)
00:24:37.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:37.619 ===================================================================================================================
00:24:37.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3473872
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3474281
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3474281 /var/tmp/bperf.sock
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3474281 ']'
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
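The launch traced above is the pattern the digest tests reuse for each error run: start bdevperf idle on a private RPC socket, then configure and trigger it over that socket. A condensed sketch of this step (paths are the ones from this workspace; the flag comments are my reading of bdevperf's usage, not something the log states):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # -m 2: core mask 0x2 (core 1); -r: private RPC socket used by bperf_rpc;
  # -w randread -o 131072 -q 16 -t 2: random 128 KiB reads at queue depth 16 for 2 s;
  # -z: start idle and wait for a perform_tests RPC instead of running immediately.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # waitforlisten then polls until /var/tmp/bperf.sock accepts RPCs
  # (up to max_retries=100), matching the trace above.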
00:24:37.877 00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:37.877 00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:08.314523] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
[2024-07-25 00:02:08.314624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474281 ]
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 00:02:08.372276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 00:02:08.480030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:02:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:38.651 nvme0n1
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:02:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
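Reassembled from the trace, the configuration issued before any I/O boils down to the sequence below; ordering matters, since digest corruption is enabled only after the controller attaches cleanly. This is a sketch, not the harness itself: the two helper functions are simplified stand-ins for the bperf_rpc/rpc_cmd helpers of the same names, and the claim that rpc_cmd goes to the default application socket (not bperf.sock) is my reading of the trace, where it is invoked without -s.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }  # the bdevperf initiator
  rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }                         # default app socket
  # Keep per-status-code NVMe error counters and retry failed I/O indefinitely, so
  # injected digest errors surface as transient-error statistics, not job failures.
  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # CRC32C error injection stays disabled while the controller attaches cleanly...
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # ...the target is attached over TCP with data digest (--ddgst) enabled, creating nvme0n1...
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # ...then CRC32C results are corrupted (-t corrupt -i 32, as traced), so digests on
  # received data stop matching and affected READs complete with status (00/22).
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
  # Finally the idle bdevperf job is released (bperf_py perform_tests in the trace):
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests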
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:38.909 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:38.909 Zero copy mechanism will not be used. 00:24:38.909 Running I/O for 2 seconds... 00:24:38.909 [2024-07-25 00:02:09.332209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.909 [2024-07-25 00:02:09.332276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.909 [2024-07-25 00:02:09.332321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.909 [2024-07-25 00:02:09.341750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.909 [2024-07-25 00:02:09.341787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.909 [2024-07-25 00:02:09.341807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.909 [2024-07-25 00:02:09.351326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.910 [2024-07-25 00:02:09.351357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.910 [2024-07-25 00:02:09.351389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.910 [2024-07-25 00:02:09.360562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.910 [2024-07-25 00:02:09.360612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.910 [2024-07-25 00:02:09.360642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.910 [2024-07-25 00:02:09.369739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.910 [2024-07-25 00:02:09.369775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.910 [2024-07-25 00:02:09.369795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.910 [2024-07-25 00:02:09.379496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.910 [2024-07-25 00:02:09.379551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.910 [2024-07-25 00:02:09.379567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.910 [2024-07-25 00:02:09.389188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:38.910 
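Each failed read leaves the three-line signature above: the digest error from nvme_tcp.c, the READ it maps to, and a TRANSIENT TRANSPORT ERROR (00/22) completion. When triaging a long capture of this console output, tallying those markers is usually enough to confirm the injection behaved as intended; the log file name below is hypothetical, standing in for wherever this output was saved:

    # Tally injected digest failures and check that each one surfaced as a
    # 00/22 transient transport error rather than leaking through as good data.
    log=nvmf_digest_error.log    # hypothetical capture of this console output
    digest_errs=$(grep -c 'data digest error on tqpair' "$log")
    transient=$(grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' "$log")
    echo "digest errors: $digest_errs, 00/22 completions: $transient"
    [ "$digest_errs" -eq "$transient" ] && echo "all digest errors completed as 00/22"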
00:24:38.910 [... ~85 further completions, 00:02:09.360562 through 00:02:10.452841, repeat the same three-line pattern: nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done reports a data digest error on tqpair=(0x1b0b290), nvme_qpair.c: 243:nvme_io_qpair_print_command prints the affected READ (sqid:1, cid 0-10 or 15, varying lba, len:32), and nvme_qpair.c: 474:spdk_nvme_print_completion shows COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd cycling 0001/0021/0041/0061 ...]
00:24:39.980 [2024-07-25 00:02:10.463316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290)
00:24:39.980 [2024-07-25 00:02:10.463347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:39.980
[2024-07-25 00:02:10.463364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.473720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.473762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.980 [2024-07-25 00:02:10.473782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.484005] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.484040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.980 [2024-07-25 00:02:10.484059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.494565] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.494600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.980 [2024-07-25 00:02:10.494619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.504983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.505017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.980 [2024-07-25 00:02:10.505036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.515108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.515143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.980 [2024-07-25 00:02:10.515162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.980 [2024-07-25 00:02:10.525319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.980 [2024-07-25 00:02:10.525351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.525368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.981 [2024-07-25 00:02:10.535613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.981 [2024-07-25 00:02:10.535648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.535666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:39.981 [2024-07-25 00:02:10.545865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.981 [2024-07-25 00:02:10.545899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.545918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:39.981 [2024-07-25 00:02:10.556029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.981 [2024-07-25 00:02:10.556063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.556082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:39.981 [2024-07-25 00:02:10.566287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.981 [2024-07-25 00:02:10.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.566351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:39.981 [2024-07-25 00:02:10.571700] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:39.981 [2024-07-25 00:02:10.571734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:39.981 [2024-07-25 00:02:10.571751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.244 [2024-07-25 00:02:10.582221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.244 [2024-07-25 00:02:10.582284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.244 [2024-07-25 00:02:10.582309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.244 [2024-07-25 00:02:10.592616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.244 [2024-07-25 00:02:10.592651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.244 [2024-07-25 00:02:10.592669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.244 [2024-07-25 00:02:10.603043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.244 [2024-07-25 00:02:10.603077] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.244 [2024-07-25 00:02:10.603096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.244 [2024-07-25 00:02:10.613697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.244 [2024-07-25 00:02:10.613732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.244 [2024-07-25 00:02:10.613750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.244 [2024-07-25 00:02:10.623801] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.623837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.623856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.634184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.634219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.634238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.644842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.644877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.644903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.655155] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.655190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.655208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.665593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.665628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.665647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.675771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.675806] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.675825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.686293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.686323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.686340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.696551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.696599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.696619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.706761] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.706796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.706814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.717011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.717046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.717064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.727122] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.727158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.727177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.737379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.737408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.737439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.747959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.747994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.748013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.758473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.758502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.758517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.767755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.767790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.767809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.777568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.777599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.777631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.787068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.787103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.787121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.796666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.796701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.796720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.806061] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.806096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.806115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.815580] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.815615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.815641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.824733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.824768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.824787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.834118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.834153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.834172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.842626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.842662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.842681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.245 [2024-07-25 00:02:10.852391] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.245 [2024-07-25 00:02:10.852424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.245 [2024-07-25 00:02:10.852441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.862087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.862122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.862141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.871176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.871211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.871229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:24:40.504 [2024-07-25 00:02:10.880634] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.880670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.880688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.890564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.890599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.890617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.899996] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.900036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.900057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.909880] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.909914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.909933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.919494] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.919525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.919556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.929292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.929323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.929339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.939161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.939196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.939215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.948753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.948788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.948807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.958546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.958581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.958599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.504 [2024-07-25 00:02:10.968499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.504 [2024-07-25 00:02:10.968546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.504 [2024-07-25 00:02:10.968566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:10.977733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:10.977768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:10.977787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:10.986699] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:10.986734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:10.986753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:10.995737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:10.995771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:10.995791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.004951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.004986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.005005] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.014672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.014707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.014726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.024111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.024146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.024165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.033514] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.033560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.033576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.042986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.043020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.043039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.052480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.052510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.052547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.061025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.061058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.061087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.070612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.070647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:40.505 [2024-07-25 00:02:11.070666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.079636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.079670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.079689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.089448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.089479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.089497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.098564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.098599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.098618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.505 [2024-07-25 00:02:11.107988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.505 [2024-07-25 00:02:11.108021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.505 [2024-07-25 00:02:11.108040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.117408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.117440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.117457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.126305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.126336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.126352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.135716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.135751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.135769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.145360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.145413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.145431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.154684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.154719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.154738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.163979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.164013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.164032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.173508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.173557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.173576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.183135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.183169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.183188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.193212] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.193254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.193276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.203618] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.203652] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.203671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.213645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.213680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.213699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.223479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.223510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.223526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.233586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.233621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.233640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.243016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.243051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.243071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.251945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.251980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.251999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.261164] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.261200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.261219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.269991] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 
00:24:40.764 [2024-07-25 00:02:11.270026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.270044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.279154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.279208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.287926] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.287961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.287980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.296904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.296938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.296957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.306055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.306089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.306115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.315424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.315470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.315488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:40.764 [2024-07-25 00:02:11.324509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1b0b290) 00:24:40.764 [2024-07-25 00:02:11.324543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:40.764 [2024-07-25 00:02:11.324560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:40.764 00:24:40.765 Latency(us) 00:24:40.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
00:24:40.765 Latency(us)
00:24:40.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:40.765 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:24:40.765 nvme0n1 : 2.00 3310.17 413.77 0.00 0.00 4828.88 1316.79 10874.12
00:24:40.765 ===================================================================================================================
00:24:40.765 Total : 3310.17 413.77 0.00 0.00 4828.88 1316.79 10874.12
00:24:40.765 0
00:24:40.765 00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:40.765 | .driver_specific
00:24:40.765 | .nvme_error
00:24:40.765 | .status_code
00:24:40.765 | .command_transient_transport_error'
00:24:41.023 00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 213 > 0 ))
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3474281
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3474281 ']'
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3474281
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3474281
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3474281'
killing process with pid 3474281
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3474281
Received shutdown signal, test time was about 2.000000 seconds
00:24:41.023
00:24:41.023 Latency(us)
00:24:41.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:41.023 ===================================================================================================================
00:24:41.023 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:41.023 00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3474281
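In script form, the transient-error check traced above (host/digest.sh@71) reduces to a short pipeline. A minimal sketch, reusing the RPC socket, bdev name, and jq filter verbatim from this run, with paths assumed relative to an SPDK checkout:

  # Fetch per-bdev I/O statistics from bdevperf over its UNIX-domain RPC socket.
  # The NVMe status counters live under driver_specific.nvme_error because
  # bdev_nvme_set_options was called earlier with --nvme-error-stat.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test passes only if the injected digest errors actually surfaced as
  # transient transport errors (the counter read 213 in this run):
  (( errcount > 0 ))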
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3474694
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3474694 /var/tmp/bperf.sock
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3474694 ']'
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:02:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:41.540 [2024-07-25 00:02:11.914235] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:24:41.540 [2024-07-25 00:02:11.914323] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3474694 ]
00:24:41.540 EAL: No free 2048 kB hugepages reported on node 1
00:24:41.540 [2024-07-25 00:02:11.986131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:41.540 [2024-07-25 00:02:12.118699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:41.798 00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:24:42.056 00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
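The bringup just traced follows the stock bperf pattern. A minimal sketch of the same steps, with the exact flags from this run, assuming an SPDK checkout as the working directory and using rpc_get_methods as a stand-in for the harness's waitforlisten poll:

  # Start bdevperf idle: -z makes it wait for a perform_tests RPC instead of
  # running immediately, and -r points it at a private RPC socket so it does
  # not collide with the target application's default socket.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Poll until the RPC socket answers before configuring anything.
  until scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  # Enable per-status NVMe error counters and (as this harness appears to
  # intend with -1) retry failed I/O indefinitely, so injected digest errors
  # are retried transparently while still being counted.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1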
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:42.314 nvme0n1
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:02:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
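Taken together, the attach-and-inject sequence above comes down to three RPCs. A sketch with the exact flags from this run; rpc_cmd is the harness helper from autotest_common.sh (distinct from bperf_rpc, which targets the bdevperf socket):

  # Attach the TCP controller with data digest enabled; --ddgst makes every
  # data PDU carry a CRC32C that is verified on receive.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Arm the accel error injector: corrupt the next 256 crc32c computations so
  # data-digest verification starts failing and the writes below complete
  # with COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of clean status.
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick off the workload bdevperf was configured with (-w randwrite -o 4096 -q 128 -t 2).
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests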
WRITE sqid:1 cid:32 nsid:1 lba:22081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.573 [2024-07-25 00:02:13.015620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:42.573 [2024-07-25 00:02:13.027468] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e7c50 00:24:42.573 [2024-07-25 00:02:13.029048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.573 [2024-07-25 00:02:13.029081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:42.573 [2024-07-25 00:02:13.040664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fa3a0 00:24:42.573 [2024-07-25 00:02:13.042449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.573 [2024-07-25 00:02:13.042478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:42.573 [2024-07-25 00:02:13.053891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f1ca0 00:24:42.574 [2024-07-25 00:02:13.055802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21876 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.055833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.067261] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190edd58 00:24:42.574 [2024-07-25 00:02:13.069369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:19461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.069412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.076429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190eaef0 00:24:42.574 [2024-07-25 00:02:13.077364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:12697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.077391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.090967] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e1f80 00:24:42.574 [2024-07-25 00:02:13.092602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.092633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.104345] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ecc78 00:24:42.574 [2024-07-25 00:02:13.106070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.106101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.117568] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e95a0 00:24:42.574 [2024-07-25 00:02:13.119476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.119504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.130858] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f7970 00:24:42.574 [2024-07-25 00:02:13.132909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.132941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.139872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f4298 00:24:42.574 [2024-07-25 00:02:13.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:15689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.140777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.153120] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fda78 00:24:42.574 [2024-07-25 00:02:13.154178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:9191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.154210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.164631] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e4de8 00:24:42.574 [2024-07-25 00:02:13.165679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.165709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.574 [2024-07-25 00:02:13.177934] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f6020 00:24:42.574 [2024-07-25 00:02:13.179137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.574 [2024-07-25 00:02:13.179169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.832 [2024-07-25 00:02:13.191180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f5be8 00:24:42.832 [2024-07-25 
00:02:13.192588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.832 [2024-07-25 00:02:13.192619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.832 [2024-07-25 00:02:13.203075] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f3e60 00:24:42.832 [2024-07-25 00:02:13.203948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.832 [2024-07-25 00:02:13.203979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.217177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fdeb0 00:24:42.833 [2024-07-25 00:02:13.218735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.218764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.230432] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e4de8 00:24:42.833 [2024-07-25 00:02:13.232152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.232184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.243635] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e9168 00:24:42.833 [2024-07-25 00:02:13.245597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.245628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.255486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190de8a8 00:24:42.833 [2024-07-25 00:02:13.256860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.256899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.267004] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ee190 00:24:42.833 [2024-07-25 00:02:13.268930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:15520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.268961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.277925] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e1f80 
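
The trace above is the whole mechanics of the nvmf_digest_error case: bdevperf is started idle (-z) and driven over the UNIX socket /var/tmp/bperf.sock, bdev_nvme is told to retry indefinitely (--bdev-retry-count -1), the controller is attached with --ddgst so every TCP data PDU carries a CRC32C data digest, and accel_error_inject_error then corrupts the digest calculations so each affected WRITE completes with TRANSIENT TRANSPORT ERROR and is retried. Stripped of the xtrace noise, the flow is roughly the sketch below; the paths and the 10.0.0.2:4420 target are taken from this run, while the target-side RPC socket (rpc.py's default) and the plain sleep in place of the harness's waitforlisten are assumptions of the sketch, not the harness itself.

    #!/usr/bin/env bash
    # Minimal sketch of the digest-error flow traced above (assumptions noted inline).
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF_SOCK=/var/tmp/bperf.sock

    # Start bdevperf idle (-z): 4 KiB random writes, queue depth 128, 2 seconds.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    sleep 1  # assumption: the harness uses waitforlisten on $BPERF_SOCK instead

    # Absorb injected digest failures by retrying at the bdev layer indefinitely.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the controller with data digest (DDGST) enabled on the TCP transport.
    "$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Against the target app (default RPC socket, an assumption here), inject CRC32C
    # corruption with the same -o/-t/-i arguments the test uses, then run the workload.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 256
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
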
00:24:42.833 [2024-07-25 00:02:13.278784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:1169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.278815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.291255] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ecc78 00:24:42.833 [2024-07-25 00:02:13.292282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.292326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.304604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190de038 00:24:42.833 [2024-07-25 00:02:13.305796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.305827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.317867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ddc00 00:24:42.833 [2024-07-25 00:02:13.319255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.319287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.331118] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fa7d8 00:24:42.833 [2024-07-25 00:02:13.332702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.332734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.343084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ecc78 00:24:42.833 [2024-07-25 00:02:13.344160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.344192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.357180] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ebb98 00:24:42.833 [2024-07-25 00:02:13.358914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:12778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.358945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.370451] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) 
with pdu=0x2000190e9e10 00:24:42.833 [2024-07-25 00:02:13.372445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.372472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.382281] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6300 00:24:42.833 [2024-07-25 00:02:13.383663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.383694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.393529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f1868 00:24:42.833 [2024-07-25 00:02:13.395686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.395717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.405623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e0630 00:24:42.833 [2024-07-25 00:02:13.406520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.406550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.418659] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e7c50 00:24:42.833 [2024-07-25 00:02:13.419704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:14641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.419736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:42.833 [2024-07-25 00:02:13.431982] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f3e60 00:24:42.833 [2024-07-25 00:02:13.433190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:13674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:42.833 [2024-07-25 00:02:13.433221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.444755] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fdeb0 00:24:43.090 [2024-07-25 00:02:13.446040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:10569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.446070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.457439] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x11e6f30) with pdu=0x2000190ea680 00:24:43.090 [2024-07-25 00:02:13.458687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:23347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.458718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.469354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f4b08 00:24:43.090 [2024-07-25 00:02:13.470428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.470456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.482809] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e3d08 00:24:43.090 [2024-07-25 00:02:13.484224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:7223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.484268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.495833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fa3a0 00:24:43.090 [2024-07-25 00:02:13.497454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.497481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.090 [2024-07-25 00:02:13.507910] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e1f80 00:24:43.090 [2024-07-25 00:02:13.509506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.090 [2024-07-25 00:02:13.509536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.521193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ee190 00:24:43.091 [2024-07-25 00:02:13.522948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.522979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.534477] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e9168 00:24:43.091 [2024-07-25 00:02:13.536447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.536476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.547774] tcp.c:2113:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f8618 00:24:43.091 [2024-07-25 00:02:13.549828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:23469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.549859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.556777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fbcf0 00:24:43.091 [2024-07-25 00:02:13.557631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:23248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.557661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.568732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e5658 00:24:43.091 [2024-07-25 00:02:13.569603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.569634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.582002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f7da8 00:24:43.091 [2024-07-25 00:02:13.583023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:7793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.583063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.595208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190de038 00:24:43.091 [2024-07-25 00:02:13.596460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.596489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.608467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ddc00 00:24:43.091 [2024-07-25 00:02:13.609810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.609841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.621731] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f6cc8 00:24:43.091 [2024-07-25 00:02:13.623278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.623323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.633533] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f7da8 00:24:43.091 [2024-07-25 00:02:13.634641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:1576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.634674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.646397] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f5378 00:24:43.091 [2024-07-25 00:02:13.647215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.647255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.659523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fc998 00:24:43.091 [2024-07-25 00:02:13.660557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.660604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.671563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190edd58 00:24:43.091 [2024-07-25 00:02:13.673429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.673463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.682470] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f2510 00:24:43.091 [2024-07-25 00:02:13.683327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.683356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.091 [2024-07-25 00:02:13.695668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e4de8 00:24:43.091 [2024-07-25 00:02:13.696691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:8812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.091 [2024-07-25 00:02:13.696730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.708917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f5378 00:24:43.348 [2024-07-25 00:02:13.710095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.710126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.348 
[2024-07-25 00:02:13.722050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f57b0 00:24:43.348 [2024-07-25 00:02:13.723306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.723334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.734264] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f92c0 00:24:43.348 [2024-07-25 00:02:13.735803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.735834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.747556] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e4de8 00:24:43.348 [2024-07-25 00:02:13.749270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.749315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.760765] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f1868 00:24:43.348 [2024-07-25 00:02:13.762637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.762669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.772656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e1710 00:24:43.348 [2024-07-25 00:02:13.774069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.774100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.348 [2024-07-25 00:02:13.785066] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f9b30 00:24:43.348 [2024-07-25 00:02:13.786566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.348 [2024-07-25 00:02:13.786598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.797751] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e27f0 00:24:43.349 [2024-07-25 00:02:13.799143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.799174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 
dnr:0 00:24:43.349 [2024-07-25 00:02:13.810431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190eaab8 00:24:43.349 [2024-07-25 00:02:13.811794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:4745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.811825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.823087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fa7d8 00:24:43.349 [2024-07-25 00:02:13.824471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:7331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.824498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.834813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ea680 00:24:43.349 [2024-07-25 00:02:13.836732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.836763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.845702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f5be8 00:24:43.349 [2024-07-25 00:02:13.846548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:11979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.846592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.859028] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e4578 00:24:43.349 [2024-07-25 00:02:13.860053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.860084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.873049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f92c0 00:24:43.349 [2024-07-25 00:02:13.874288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.874331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.886128] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e7818 00:24:43.349 [2024-07-25 00:02:13.887526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.887572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.898158] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fa3a0 00:24:43.349 [2024-07-25 00:02:13.899569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.899600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.911486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f57b0 00:24:43.349 [2024-07-25 00:02:13.913040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.913071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.924716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fdeb0 00:24:43.349 [2024-07-25 00:02:13.926445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:4758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.926473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.937954] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e9e10 00:24:43.349 [2024-07-25 00:02:13.939883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.939915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.349 [2024-07-25 00:02:13.949802] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f0788 00:24:43.349 [2024-07-25 00:02:13.951194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.349 [2024-07-25 00:02:13.951225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:13.961395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e49b0 00:24:43.607 [2024-07-25 00:02:13.963345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:13.963373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:13.972300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ee190 00:24:43.607 [2024-07-25 00:02:13.973091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:15050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:13.973122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:13.985528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6738 00:24:43.607 [2024-07-25 00:02:13.986569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:13.986600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:13.998941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190dece0 00:24:43.607 [2024-07-25 00:02:14.000162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:14.000194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:14.012289] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fda78 00:24:43.607 [2024-07-25 00:02:14.013668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:14.013699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:14.025663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6fa8 00:24:43.607 [2024-07-25 00:02:14.027204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:3261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:14.027250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:14.038877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6738 00:24:43.607 [2024-07-25 00:02:14.040567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:14.040599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:14.050652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e9168 00:24:43.607 [2024-07-25 00:02:14.051880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.607 [2024-07-25 00:02:14.051911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.607 [2024-07-25 00:02:14.063442] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e27f0 00:24:43.607 [2024-07-25 00:02:14.064479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.064509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.075354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fd208 00:24:43.608 [2024-07-25 00:02:14.077214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:15874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.077253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.086254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f8618 00:24:43.608 [2024-07-25 00:02:14.087129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.087159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.100367] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190df988 00:24:43.608 [2024-07-25 00:02:14.101456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.101483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.112251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fdeb0 00:24:43.608 [2024-07-25 00:02:14.113269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:24250 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.113315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.125464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fda78 00:24:43.608 [2024-07-25 00:02:14.126715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.126746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.138819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190dece0 00:24:43.608 [2024-07-25 00:02:14.140191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:18983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.140223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.152024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f4f40 00:24:43.608 [2024-07-25 00:02:14.153573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.153604] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.165271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fdeb0 00:24:43.608 [2024-07-25 00:02:14.166983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.167014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.177069] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f46d0 00:24:43.608 [2024-07-25 00:02:14.178325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.178353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.191192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fcdd0 00:24:43.608 [2024-07-25 00:02:14.193120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.193151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.204473] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f6890 00:24:43.608 [2024-07-25 00:02:14.206608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:23513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.206639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:43.608 [2024-07-25 00:02:14.213421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ed4e8 00:24:43.608 [2024-07-25 00:02:14.214297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.608 [2024-07-25 00:02:14.214325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.226810] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e12d8 00:24:43.866 [2024-07-25 00:02:14.227828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.227859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.239621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190fbcf0 00:24:43.866 [2024-07-25 00:02:14.240701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:18469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 
00:02:14.240733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.251620] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6738 00:24:43.866 [2024-07-25 00:02:14.252674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.252709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.265127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f7da8 00:24:43.866 [2024-07-25 00:02:14.266376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.266407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.278560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f81e0 00:24:43.866 [2024-07-25 00:02:14.279950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.279985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.291970] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6fa8 00:24:43.866 [2024-07-25 00:02:14.293534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.293582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.303987] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190e6738 00:24:43.866 [2024-07-25 00:02:14.305051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.305083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.317214] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ea680 00:24:43.866 [2024-07-25 00:02:14.318082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:43.866 [2024-07-25 00:02:14.318115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:43.866 [2024-07-25 00:02:14.330503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190f9b30 00:24:43.866 [2024-07-25 00:02:14.331584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
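
Every injected corruption in this dump follows the same three-record shape: tcp.c:2113:data_crc32_calc_done flags the PDU whose CRC32C check failed, nvme_qpair.c prints the WRITE it belonged to, and the completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. retryable, which is why the 2-second workload keeps running under --bdev-retry-count -1. When checking a run like this offline, the injections are easy to tally with grep; nvmf-digest.log below is only an assumed saved copy of this console output:

    # Count injected digest errors; grep -o counts occurrences rather than
    # lines, so the long wrapped lines above do not skew the total.
    grep -o 'Data digest error on tqpair' nvmf-digest.log | wc -l
    # Which PDU buffers were hit, most frequent first:
    grep -o 'pdu=0x[0-9a-f]*' nvmf-digest.log | sort | uniq -c | sort -rn | head
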
00:24:43.866 [2024-07-25 00:02:14.331616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:43.866 [2024-07-25 00:02:14.342510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190ef270
00:24:43.866 [2024-07-25 00:02:14.344386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:43.866 [2024-07-25 00:02:14.344414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
[... the same three-line pattern repeats for the remainder of the run, 00:02:14.353 through 00:02:14.932: a data_crc32_calc_done data digest error on tqpair=(0x11e6f30), the failing WRITE, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:24:44.383 [2024-07-25 00:02:14.946119] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e6f30) with pdu=0x2000190df988
00:24:44.383 [2024-07-25 00:02:14.946430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:44.383 [2024-07-25 00:02:14.946458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:24:44.383
00:24:44.383                                                      Latency(us)
00:24:44.383 Device Information                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:44.383 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:24:44.383 nvme0n1 : 2.01 19994.69 78.10 0.00 0.00 6386.57 2621.44 18155.90
00:24:44.383 ===================================================================================================================
00:24:44.383 Total : 19994.69 78.10 0.00 0.00 6386.57 2621.44 18155.90
00:24:44.383 0
00:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:02:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 ))
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3474694
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3474694 ']'
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3474694
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3474694
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3474694'
killing process with pid 3474694
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3474694
Received shutdown signal, test time was about 2.000000 seconds
00:24:44.897
00:24:44.897                                                      Latency(us)
00:24:44.897 Device Information                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:24:44.897 ===================================================================================================================
00:24:44.897 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3474694
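For readability, here is the error-accounting step just traced, reassembled as one shell snippet. The rpc.py path, the bperf.sock socket, the jq filter, and the 157-vs-0 assertion are copied from the xtrace above; the function layout is a best-effort sketch of host/digest.sh's get_transient_errcount, not a verified copy of the script.

    # Sketch of get_transient_errcount, reassembled from the xtrace above:
    # ask the bdevperf app for per-bdev iostat over its RPC socket, then pull
    # the transient-transport-error counter out of the NVMe error stats that
    # bdev_nvme_set_options --nvme-error-stat keeps per status code.
    get_transient_errcount() {
        local bdev=$1
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # The run above passes because at least one digest-corrupted WRITE was
    # counted as a transient transport error:
    (( $(get_transient_errcount nvme0n1) > 0 ))    # here the count was 157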
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3475098
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3475098 /var/tmp/bperf.sock
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 3475098 ']'
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
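The bdevperf launch just traced, restated as a short sketch. Flag readings follow standard bdevperf usage; the backgrounding and the $! capture are implied by the bperfpid assignment in the trace rather than shown verbatim, so treat them as an assumption.

    # Start bdevperf idle (-z) on core 1 (core mask 0x2), serving RPCs on a
    # UNIX socket, configured for a 2-second randwrite run at 128 KiB I/O
    # size and queue depth 16:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!                                   # 3475098 in this run
    # Block until the app is up and listening (autotest_common.sh helper):
    waitforlisten "$bperfpid" /var/tmp/bperf.sock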
00:24:45.155 [2024-07-25 00:02:15.574105] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:24:45.155 [2024-07-25 00:02:15.574190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3475098 ]
00:24:45.155 I/O size of 131072 is greater than zero copy threshold (65536).
00:24:45.155 Zero copy mechanism will not be used.
00:24:45.155 EAL: No free 2048 kB hugepages reported on node 1
00:24:45.155 [2024-07-25 00:02:15.631811] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:45.412 [2024-07-25 00:02:15.740205] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:02:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:24:46.235 nvme0n1
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:02:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
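The configuration sequence traced above, condensed into one sketch. One assumption is flagged: rpc_cmd in the trace carries no -s flag, so it is read here as addressing the nvmf target's default RPC socket, while bperf_rpc explicitly targets the bdevperf socket; every command and argument is otherwise copied from the xtrace.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdevperf side: keep per-status-code NVMe error counters and retry
    # failed I/O indefinitely, so digest failures surface as counted retries:
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side (default RPC socket assumed): clear any stale injection:
    $rpc accel_error_inject_error -o crc32c -t disable

    # bdevperf side: attach the controller with TCP data digest enabled:
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: corrupt the next 32 crc32c operations, so the affected
    # WRITEs fail their data digest check and complete with
    # COMMAND TRANSIENT TRANSPORT ERROR (00/22):
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # kick off the configured 2-second I/O run:
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests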
00:24:46.235 [2024-07-25 00:02:16.816825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.235 [2024-07-25 00:02:16.817197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.235 [2024-07-25 00:02:16.817248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.235 [2024-07-25 00:02:16.827872] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.235 [2024-07-25 00:02:16.828254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.235 [2024-07-25 00:02:16.828303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.235 [2024-07-25 00:02:16.838804] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.235 [2024-07-25 00:02:16.839168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.235 [2024-07-25 00:02:16.839202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.493 [2024-07-25 00:02:16.850514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.493 [2024-07-25 00:02:16.850895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.493 [2024-07-25 00:02:16.850927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.493 [2024-07-25 00:02:16.861502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.493 [2024-07-25 00:02:16.861874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.493 [2024-07-25 00:02:16.861907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.493 [2024-07-25 00:02:16.871945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.493 [2024-07-25 00:02:16.872326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.872381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.881762] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.882108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.882141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.892317] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.892669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.892701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.903404] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.903755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.903788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.912942] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.913311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.913342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.922785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.923148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.923180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.932288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.932623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.932655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.940947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.941101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.941129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.950029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.950383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.950412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.958590] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.958983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.959011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.966670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.967054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.967081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.975596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.975938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.975982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.983950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.984337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.984367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.991823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.992190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:16.992219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:16.999691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:16.999986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.000014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.008050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.008388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.008417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.015698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.015996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.016025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.023775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.024146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.024184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.031307] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.031604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.031633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.039096] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.039407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.039436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.046973] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.047276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.047304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.055775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.056130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.056159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.064466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.064782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 
[2024-07-25 00:02:17.064810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.073323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.073649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.073677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.082519] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.082944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.082972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.091879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.092299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.092329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.494 [2024-07-25 00:02:17.101007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.494 [2024-07-25 00:02:17.101427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.494 [2024-07-25 00:02:17.101456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.110282] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.110606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.110635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.119269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.119655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.119703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.128250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.128535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.128563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.137141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.137468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.137498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.145528] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.145835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.145863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.154444] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.154809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.154836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.163702] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.164096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.164124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.172627] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.172921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.172963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.181065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.181316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.181345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.190022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.190380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.190409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.199134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.199567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.199595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.208240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.753 [2024-07-25 00:02:17.208592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.753 [2024-07-25 00:02:17.208621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.753 [2024-07-25 00:02:17.217344] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.754 [2024-07-25 00:02:17.217684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.754 [2024-07-25 00:02:17.217712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:46.754 [2024-07-25 00:02:17.225501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.754 [2024-07-25 00:02:17.225789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.754 [2024-07-25 00:02:17.225817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:46.754 [2024-07-25 00:02:17.234453] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.754 [2024-07-25 00:02:17.234817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.754 [2024-07-25 00:02:17.234845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:46.754 [2024-07-25 00:02:17.243387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.754 [2024-07-25 00:02:17.243684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:46.754 [2024-07-25 00:02:17.243712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:46.754 [2024-07-25 00:02:17.251895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:46.754 [2024-07-25 00:02:17.252251] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.754 [2024-07-25 00:02:17.252285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:46.754 [2024-07-25 00:02:17.259831] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:46.754 [2024-07-25 00:02:17.260143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:46.754 [2024-07-25 00:02:17.260171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-entry pattern (tcp.c:2113:data_crc32_calc_done data digest error on tqpair 0x11e7270, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for further WRITE commands at varying lba between 00:02:17.268 and 00:02:17.459, with sqhd cycling 0001/0021/0041/0061 ...]
00:24:47.013 [2024-07-25 00:02:17.467813] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:47.013 [2024-07-25 00:02:17.468133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.013 [2024-07-25 00:02:17.468162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
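[Editor's note] Every triplet in this run reports the same condition: a data-bearing PDU fails its data-digest check in data_crc32_calc_done(), and the associated WRITE is completed back to the caller with COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status. The NVMe/TCP data digest (DDGST) is CRC32C (the Castagnoli polynomial, as in iSCSI), and SPDK computes it with its own helper (spdk_crc32c_update). The following is only a minimal, hedged sketch of that digest arithmetic using the standard CRC32C parameters; it is not SPDK's code path, and crc32c() here is an illustrative name:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reflected CRC32C (Castagnoli), polynomial 0x1EDC6F41 (reflected form
     * 0x82F63B78), init 0xFFFFFFFF, final XOR 0xFFFFFFFF. A receiver flags a
     * "Data digest error" when this value, computed over the PDU's DATA field,
     * differs from the DDGST carried on the wire. Bitwise for clarity, not speed. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int k = 0; k < 8; k++) {
                /* Shift right; XOR in the polynomial when the dropped bit was 1. */
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1u)));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* "123456789" is the conventional CRC check string. */
        const char *check = "123456789";
        printf("CRC32C = 0x%08X (expected check value 0xE3069283)\n",
               crc32c((const uint8_t *)check, strlen(check)));
        return 0;
    }

Compiled with a plain `cc`, this prints the standard CRC32C check value 0xE3069283, which is how the parameters above can be verified independently of any transport code.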
00:24:47.013 [2024-07-25 00:02:17.475593] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:47.013 [2024-07-25 00:02:17.475946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.013 [2024-07-25 00:02:17.475974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... the same data_crc32_calc_done / WRITE / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet continues for further WRITE commands at varying lba between 00:02:17.484 and 00:02:18.340 (console timestamps 00:24:47.013 through 00:24:47.793), all on tqpair 0x11e7270, qid:1 cid:15, with sqhd cycling 0001/0021/0041/0061 ...]
00:24:47.793 [2024-07-25 00:02:18.348711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:47.793 [2024-07-25 00:02:18.349038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.793 [2024-07-25 00:02:18.349067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:24:47.793 [2024-07-25 00:02:18.356606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:47.793 [2024-07-25 00:02:18.356915] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.793 [2024-07-25 00:02:18.356944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.793 [2024-07-25 00:02:18.364768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:47.793 [2024-07-25 00:02:18.365060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.793 [2024-07-25 00:02:18.365088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.793 [2024-07-25 00:02:18.372760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:47.793 [2024-07-25 00:02:18.373114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.793 [2024-07-25 00:02:18.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:47.794 [2024-07-25 00:02:18.380267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:47.794 [2024-07-25 00:02:18.380499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.794 [2024-07-25 00:02:18.380527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:47.794 [2024-07-25 00:02:18.387896] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:47.794 [2024-07-25 00:02:18.388176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.794 [2024-07-25 00:02:18.388205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:47.794 [2024-07-25 00:02:18.395079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:47.794 [2024-07-25 00:02:18.395389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.794 [2024-07-25 00:02:18.395417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:47.794 [2024-07-25 00:02:18.403540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.403904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.403933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.412356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.412594] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.412622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.420292] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.420541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.420570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.427957] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.428209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.428237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.435097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.435371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.435399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.442978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.443157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.443186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.450817] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.451049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.451077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.459382] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.459662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.459690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.467782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with 
pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.468124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.468152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.476323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.052 [2024-07-25 00:02:18.476545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.052 [2024-07-25 00:02:18.476573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.052 [2024-07-25 00:02:18.484764] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.485071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.485099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.492991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.493259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.493288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.501736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.501945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.501973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.510237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.510481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.510514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.518782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.519008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.519037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.526887] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.527128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.527156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.535524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.535832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.535862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.543429] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.543617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.543645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.550772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.551050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.551079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.559437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.559768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.559796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.567698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.567956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.567984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.576560] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.576781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.576809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.584917] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.585226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.585262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.593220] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.593502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.593530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.601390] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.601589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.601617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.609717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.609943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.609972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.618643] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.618805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.618833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.626945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.627163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.627191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.635257] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.635558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.635585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:24:48.053 [2024-07-25 00:02:18.643761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.644016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.644045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.652144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.652417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.652445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.053 [2024-07-25 00:02:18.660383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.053 [2024-07-25 00:02:18.660629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.053 [2024-07-25 00:02:18.660659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.311 [2024-07-25 00:02:18.668329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.668573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.668603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.675935] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.676099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.676129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.683675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.683894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.683923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.691524] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.691703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.691732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.699094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.699289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.699318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.706578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.706860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.706888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.714998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.715234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.715273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.723001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.723197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.723231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.730805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.731007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.731035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.738705] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.738956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.738985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.747060] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.747254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.747283] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.754365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.754556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.754584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.761850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.762083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.762111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.769501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.769645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.769673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.776931] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.777080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.777108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.784663] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.784952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.784980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.792050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.792297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.792326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:48.312 [2024-07-25 00:02:18.799825] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90 00:24:48.312 [2024-07-25 00:02:18.800043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:48.312 [2024-07-25 00:02:18.800071] 
00:24:48.312 [2024-07-25 00:02:18.807431] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x11e7270) with pdu=0x2000190fef90
00:24:48.312 [2024-07-25 00:02:18.807611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:48.312 [2024-07-25 00:02:18.807639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:24:48.312
00:24:48.312 Latency(us)
00:24:48.312 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average     min      max
00:24:48.312 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:24:48.312 nvme0n1            :       2.00 3866.62  483.33    0.00  0.00  4127.15 2839.89 13301.38
00:24:48.312 ===================================================================================================================
00:24:48.312 Total              :            3866.62  483.33    0.00  0.00  4127.15 2839.89 13301.38
00:24:48.312 0
00:24:48.312 00:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:24:48.312 00:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:24:48.312 00:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:24:48.312 | .driver_specific
00:24:48.312 | .nvme_error
00:24:48.312 | .status_code
00:24:48.312 | .command_transient_transport_error'
00:24:48.312 00:02:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:24:48.570 00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 250 > 0 ))
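For reference, the get_transient_errcount check above is a single RPC round-trip: bdev_get_iostat is queried over the bperf control socket and the transient-transport-error counter is pulled out of the JSON with jq. A minimal stand-alone sketch, assuming the same /var/tmp/bperf.sock socket and an SPDK checkout at $SPDK_DIR (a hypothetical variable):

    # Count completions that ended in COMMAND TRANSIENT TRANSPORT ERROR (00/22)
    # for bdev nvme0n1, mirroring host/digest.sh's get_transient_errcount.
    errcount=$("$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # The test passes only if at least one injected digest error was counted.
    (( errcount > 0 )) && echo "saw $errcount transient transport errors"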
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3475098
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3475098 ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3475098
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3475098
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3475098'
killing process with pid 3475098
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3475098
Received shutdown signal, test time was about 2.000000 seconds
00:24:48.570
00:24:48.570 Latency(us)
00:24:48.570 Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s  Average     min      max
00:24:48.570 ===================================================================================================================
00:24:48.570 Total              :               0.00    0.00    0.00  0.00     0.00    0.00     0.00
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3475098
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 3473729 ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3473729'
killing process with pid 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 3473729
00:24:49.084
00:24:49.084 real 0m15.436s
00:24:49.084 user 0m30.570s
00:24:49.084 sys 0m4.192s
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:24:49.084 ************************************
00:24:49.084 END TEST nvmf_digest_error
00:24:49.084 ************************************
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 3473729 ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 3473729
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 3473729 ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 3473729
00:24:49.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3473729) - No such process
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 3473729 is not found'
Process with pid 3473729 is not found
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:02:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:51.242
00:24:51.242 real 0m38.608s
00:24:51.242 user 1m8.958s
00:24:51.242 sys 0m9.996s
00:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:02:21 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:24:51.242 ************************************
00:24:51.242 END TEST nvmf_digest
00:24:51.242 ************************************
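The kill/cleanup stanzas traced above all go through the same autotest_common.sh helper. A hedged reconstruction of that killprocess pattern, pieced together from the xtrace lines (the sudo branch behavior is an assumption; the real helper may differ in detail):

    killprocess() {
        local pid=$1 process_name=
        [ -z "$pid" ] && return 1
        if ! kill -0 "$pid" 2> /dev/null; then
            # matches the 'No such process' path seen for pid 3473729 above
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            kill -9 "$pid"   # assumed: force-kill privileged wrappers
        else
            echo "killing process with pid $pid"
            kill "$pid" && wait "$pid"   # wait works because the pid is a child of this shell
        fi
    }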
00:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]]
00:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]]
00:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]]
00:02:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable
00:02:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.242 ************************************
00:24:51.242 START TEST nvmf_bdevperf
00:24:51.242 ************************************
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:24:51.501 * Looking for test storage...
00:24:51.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three tool directories repeated by earlier sources ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[... the same three tool directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three tool directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[... the same three tool directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
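The NVME_HOSTNQN/NVME_HOSTID pair exported by nvmf/common.sh@17-19 above can be reproduced directly from nvme-cli output; a minimal sketch (the suffix extraction is an assumption, but gen-hostnqn output has the nqn.2014-08.org.nvmexpress:uuid:<uuid> shape seen in this run):

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}    # keep only the trailing uuid (assumed extraction)
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")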
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable
00:02:21 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.401
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=()
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
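The e810 selection above is plain vendor/device-id matching over the PCI bus. A hedged stand-alone equivalent (the lspci field handling is an assumption; gather_supported_nvmf_pci_devs itself works from a prebuilt pci_bus_cache, as the array expansions show):

    # Keep PCI functions whose vendor:device is 0x8086:0x159b (Intel E810),
    # printing them in the same 'Found <bdf> (<vendor> - <device>)' form as above.
    e810=()
    while read -r bdf vendor device; do
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found $bdf ($vendor - $device)"
            e810+=("$bdf")
        fi
    done < <(lspci -Dnmm | awk '{print $1, "0x"$3, "0x"$4}' | tr -d '"')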
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:24:53.402 Found net devices under 0000:0a:00.0: cvl_0_0
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]]
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:24:53.402 Found net devices under 0000:0a:00.1: cvl_0_1
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:53.402 00:02:23 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:53.402 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:53.402 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:53.402 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:53.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:53.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms
00:24:53.661
00:24:53.661 --- 10.0.0.2 ping statistics ---
00:24:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.661 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:53.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:53.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms
00:24:53.661
00:24:53.661 --- 10.0.0.1 ping statistics ---
00:24:53.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:53.661 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3477565
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3477565
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3477565 ']'
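
For anyone reproducing this outside CI: nvmf_tcp_init splits one physical NIC across two network namespaces rather than using veth, so traffic between the initiator (10.0.0.1 on cvl_0_1, root namespace) and the target (10.0.0.2 on cvl_0_0, namespace cvl_0_0_ns_spdk) crosses the real wire. Condensed from the trace above (assumes the two ports are cabled back-to-back):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespaced target -> root ns

The two successful one-packet pings verify the data path before nvmf_tgt is ever started, which is why NVMF_APP is then prefixed with the ip netns exec wrapper.
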
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:53.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:53.661 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.661 [2024-07-25 00:02:24.161500] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:24:53.661 [2024-07-25 00:02:24.161599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:53.661 EAL: No free 2048 kB hugepages reported on node 1
00:24:53.661 [2024-07-25 00:02:24.226182] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:53.919 [2024-07-25 00:02:24.336841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:53.919 [2024-07-25 00:02:24.336893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:53.919 [2024-07-25 00:02:24.336921] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:53.919 [2024-07-25 00:02:24.336933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:53.919 [2024-07-25 00:02:24.336943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
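
waitforlisten (common/autotest_common.sh) does not sleep for a fixed time; it polls until the newly launched process answers on its RPC socket, up to max_retries. A rough sketch of that wait loop, using SPDK's stock scripts/rpc.py (details simplified from the real helper; the loop shape is my reconstruction):

    # Poll the UNIX-domain RPC socket until the target answers or we give up.
    # $nvmfpid is the nvmf_tgt pid recorded above.
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$nvmfpid" 2> /dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
            break    # socket is up and the app is servicing RPCs
        fi
        sleep 0.5
    done
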
00:24:53.919 [2024-07-25 00:02:24.337030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:53.919 [2024-07-25 00:02:24.337336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:53.919 [2024-07-25 00:02:24.337341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.919 [2024-07-25 00:02:24.483868] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.919 Malloc0
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:53.919 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:54.177 [2024-07-25 00:02:24.542146] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
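
Stripped of the rpc_cmd/xtrace plumbing, tgt_init above is five RPCs against the freshly started target. With SPDK's scripts/rpc.py they look like this (all values copied from the log; the comments are my gloss, not the test's):

    rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, options as in the log
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The UNIX-domain RPC socket lives on the shared filesystem, so rpc_cmd can drive the target from the root namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.
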
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:54.177 {
00:24:54.177 "params": {
00:24:54.177 "name": "Nvme$subsystem",
00:24:54.177 "trtype": "$TEST_TRANSPORT",
00:24:54.177 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:54.177 "adrfam": "ipv4",
00:24:54.177 "trsvcid": "$NVMF_PORT",
00:24:54.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:54.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:54.177 "hdgst": ${hdgst:-false},
00:24:54.177 "ddgst": ${ddgst:-false}
00:24:54.177 },
00:24:54.177 "method": "bdev_nvme_attach_controller"
00:24:54.177 }
00:24:54.177 EOF
00:24:54.177 )")
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:54.177 00:02:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:54.177 "params": {
00:24:54.177 "name": "Nvme1",
00:24:54.177 "trtype": "tcp",
00:24:54.177 "traddr": "10.0.0.2",
00:24:54.177 "adrfam": "ipv4",
00:24:54.177 "trsvcid": "4420",
00:24:54.177 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:54.177 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:54.177 "hdgst": false,
00:24:54.178 "ddgst": false
00:24:54.178 },
00:24:54.178 "method": "bdev_nvme_attach_controller"
00:24:54.178 }'
[2024-07-25 00:02:24.591732] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
[2024-07-25 00:02:24.591803] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477592 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 00:02:24.651079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 00:02:24.764101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
Running I/O for 1 seconds...
00:24:55.373
00:24:55.373 Latency(us)
00:24:55.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.373 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:55.373 Verification LBA range: start 0x0 length 0x4000
00:24:55.373 Nvme1n1 : 1.02 8450.48 33.01 0.00 0.00 15088.14 2864.17 16117.00
00:24:55.373 ===================================================================================================================
00:24:55.373 Total : 8450.48 33.01 0.00 0.00 15088.14 2864.17 16117.00
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3477813
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:24:55.938 {
00:24:55.938 "params": {
00:24:55.938 "name": "Nvme$subsystem",
00:24:55.938 "trtype": "$TEST_TRANSPORT",
00:24:55.938 "traddr": "$NVMF_FIRST_TARGET_IP",
00:24:55.938 "adrfam": "ipv4",
00:24:55.938 "trsvcid": "$NVMF_PORT",
00:24:55.938 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:24:55.938 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:24:55.938 "hdgst": ${hdgst:-false},
00:24:55.938 "ddgst": ${ddgst:-false}
00:24:55.938 },
00:24:55.938 "method": "bdev_nvme_attach_controller"
00:24:55.938 }
00:24:55.938 EOF
00:24:55.938 )")
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:24:55.938 00:02:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:24:55.938 "params": {
00:24:55.938 "name": "Nvme1",
00:24:55.938 "trtype": "tcp",
00:24:55.938 "traddr": "10.0.0.2",
00:24:55.938 "adrfam": "ipv4",
00:24:55.938 "trsvcid": "4420",
00:24:55.938 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:24:55.938 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:24:55.938 "hdgst": false,
00:24:55.938 "ddgst": false
00:24:55.938 },
00:24:55.938 "method": "bdev_nvme_attach_controller"
00:24:55.938 }'
[2024-07-25 00:02:26.289812] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
[2024-07-25 00:02:26.289889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3477813 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-25 00:02:26.350200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 00:02:26.460948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:24:56.504 Running I/O for 15 seconds...
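
Both bdevperf runs get their bdev configuration through process substitution: gen_nvmf_target_json prints the resolved JSON seen above on stdout, bash exposes it as /dev/fd/62 (or /dev/fd/63), and bdevperf reads it with --json, so no config file ever touches disk. The shape of the call, with the flags taken from the log:

    # Feed bdevperf its NVMe-oF bdev config without a temp file.
    # gen_nvmf_target_json is the test helper traced above; the <(...)
    # process substitution becomes the /dev/fd/NN path seen in the log.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) \
        -q 128 -o 4096 -w verify -t 15 -f
    # -q 128: queue depth   -o 4096: 4 KiB I/O   -w verify: write/read-back verify
    # -t 15: run for 15 s   -f: flag this test passes for its kill/failover phase

The first run (-t 1, no -f) is a plain sanity pass, which is why it completes with the latency table above; the second run (-t 15 -f) is the one the failover step below is aimed at.
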
00:24:59.036 00:02:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3477565
00:24:59.036 00:02:29 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:24:59.036 [2024-07-25 00:02:29.256503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.036 [2024-07-25 00:02:29.256583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.036 [2024-07-25 00:02:29.256618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:59.036 [2024-07-25 00:02:29.256637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.036 [2024-07-25 00:02:29.256656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:37640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:59.036 [2024-07-25 00:02:29.256672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... roughly 120 further command/completion pairs trimmed here: every remaining queued READ (lba 36816-37624) and WRITE (lba 37648-37816) on qpair 1 is aborted with the same ABORTED - SQ DELETION (00/08) status ...]
00:24:59.039 [2024-07-25 00:02:29.260921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f1e8c0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.260939] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:59.039 [2024-07-25 00:02:29.260952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:59.039 [2024-07-25 00:02:29.260964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:37632 len:8 PRP1 0x0 PRP2 0x0
00:24:59.039 [2024-07-25 00:02:29.260979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:59.039 [2024-07-25 00:02:29.261043] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f1e8c0 was disconnected and freed. reset controller.
00:24:59.039 [2024-07-25 00:02:29.264914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.264995] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.265773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.265804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.265821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.266078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.266346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.039 [2024-07-25 00:02:29.266369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.039 [2024-07-25 00:02:29.266386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.039 [2024-07-25 00:02:29.270038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.039 [2024-07-25 00:02:29.279250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.279696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.279724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.279756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.280017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.280274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.039 [2024-07-25 00:02:29.280297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.039 [2024-07-25 00:02:29.280313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.039 [2024-07-25 00:02:29.283895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.039 [2024-07-25 00:02:29.293202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.293597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.293629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.293647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.293887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.294131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.039 [2024-07-25 00:02:29.294154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.039 [2024-07-25 00:02:29.294170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.039 [2024-07-25 00:02:29.297762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.039 [2024-07-25 00:02:29.307075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.307519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.307550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.307574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.307815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.308059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.039 [2024-07-25 00:02:29.308082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.039 [2024-07-25 00:02:29.308098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.039 [2024-07-25 00:02:29.311696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.039 [2024-07-25 00:02:29.321005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.321435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.321467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.321485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.039 [2024-07-25 00:02:29.321725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.039 [2024-07-25 00:02:29.321967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.039 [2024-07-25 00:02:29.321990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.039 [2024-07-25 00:02:29.322006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.039 [2024-07-25 00:02:29.325597] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.039 [2024-07-25 00:02:29.334916] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.039 [2024-07-25 00:02:29.335344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.039 [2024-07-25 00:02:29.335376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.039 [2024-07-25 00:02:29.335394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.335634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.335877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.335900] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.335915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.339507] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.348914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.349325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.349357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.349375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.349615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.349858] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.349887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.349903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.353498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.362811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.363233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.363270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.363288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.363527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.363770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.363793] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.363809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.367401] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.376708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.377112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.377143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.377161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.377411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.377655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.377678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.377693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.381284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.390593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.390988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.391019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.391036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.391287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.391531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.391554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.391569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.395151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.404509] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.404930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.404958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.404974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.405213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.405466] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.405490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.405505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.409087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.418404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.418804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.418835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.418852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.419092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.419346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.419370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.419385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.040 [2024-07-25 00:02:29.422969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.040 [2024-07-25 00:02:29.432298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.040 [2024-07-25 00:02:29.432710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.040 [2024-07-25 00:02:29.432740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.040 [2024-07-25 00:02:29.432757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.040 [2024-07-25 00:02:29.432996] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.040 [2024-07-25 00:02:29.433239] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.040 [2024-07-25 00:02:29.433272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.040 [2024-07-25 00:02:29.433288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.436871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.446183] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.446592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.446624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.446642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.446888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.447131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.447154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.447170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.450764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.460074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.460481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.460512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.460530] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.460769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.461012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.461036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.461051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.464643] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.473974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.474401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.474432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.474450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.474689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.474932] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.474955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.474970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.478564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.487866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.488285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.488317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.488335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.488574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.488817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.488840] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.488861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.492456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.501767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.502201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.502233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.502263] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.502504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.502748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.502771] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.502787] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.506375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.515686] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.516111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.516142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.516160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.516411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.516654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.516678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.516692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.520282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.529596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.529996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.530027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.530045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.530293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.530538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.530561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.530576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.534172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.543491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.544036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.544089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.544107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.544358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.544602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.544626] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.544641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.548223] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.557540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.557967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.557998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.558016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.558265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.558509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.558532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.558548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.562129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.571457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.041 [2024-07-25 00:02:29.571896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.041 [2024-07-25 00:02:29.571927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.041 [2024-07-25 00:02:29.571945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.041 [2024-07-25 00:02:29.572185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.041 [2024-07-25 00:02:29.572437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.041 [2024-07-25 00:02:29.572460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.041 [2024-07-25 00:02:29.572473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.041 [2024-07-25 00:02:29.576105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.041 [2024-07-25 00:02:29.585428] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.042 [2024-07-25 00:02:29.585834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.042 [2024-07-25 00:02:29.585865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.042 [2024-07-25 00:02:29.585882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.042 [2024-07-25 00:02:29.586122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.042 [2024-07-25 00:02:29.586380] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.042 [2024-07-25 00:02:29.586404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.042 [2024-07-25 00:02:29.586419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.042 [2024-07-25 00:02:29.589996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.042 [2024-07-25 00:02:29.599337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.042 [2024-07-25 00:02:29.599765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.042 [2024-07-25 00:02:29.599796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.042 [2024-07-25 00:02:29.599814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.042 [2024-07-25 00:02:29.600053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.042 [2024-07-25 00:02:29.600307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.042 [2024-07-25 00:02:29.600332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.042 [2024-07-25 00:02:29.600347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.042 [2024-07-25 00:02:29.603928] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.042 [2024-07-25 00:02:29.613265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.042 [2024-07-25 00:02:29.613691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.042 [2024-07-25 00:02:29.613722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.042 [2024-07-25 00:02:29.613739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.042 [2024-07-25 00:02:29.613978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.042 [2024-07-25 00:02:29.614221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.042 [2024-07-25 00:02:29.614255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.042 [2024-07-25 00:02:29.614273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.042 [2024-07-25 00:02:29.617857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.042 [2024-07-25 00:02:29.627180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.042 [2024-07-25 00:02:29.627562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.042 [2024-07-25 00:02:29.627593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.042 [2024-07-25 00:02:29.627611] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.042 [2024-07-25 00:02:29.627851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.042 [2024-07-25 00:02:29.628094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.042 [2024-07-25 00:02:29.628118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.042 [2024-07-25 00:02:29.628133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.042 [2024-07-25 00:02:29.631742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.042 [2024-07-25 00:02:29.641081] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.042 [2024-07-25 00:02:29.641492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.042 [2024-07-25 00:02:29.641524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.042 [2024-07-25 00:02:29.641542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.042 [2024-07-25 00:02:29.641781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.042 [2024-07-25 00:02:29.642024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.042 [2024-07-25 00:02:29.642047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.042 [2024-07-25 00:02:29.642063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.645658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.655008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.655393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.655425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.655443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.655682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.655925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.655956] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.655971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.659567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.668906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.669336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.669368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.669386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.669631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.669874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.669898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.669913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.673518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.682859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.683237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.683276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.683299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.683539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.683783] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.683806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.683821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.687410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.696722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.697163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.697189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.697219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.697472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.697717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.697740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.697755] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.701353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.710674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.711099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.711129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.711147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.711396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.711640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.711663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.711678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.715268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.724583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.724998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.725028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.301 [2024-07-25 00:02:29.725046] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.301 [2024-07-25 00:02:29.725296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.301 [2024-07-25 00:02:29.725539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.301 [2024-07-25 00:02:29.725568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.301 [2024-07-25 00:02:29.725584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.301 [2024-07-25 00:02:29.729174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.301 [2024-07-25 00:02:29.738504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.301 [2024-07-25 00:02:29.738922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.301 [2024-07-25 00:02:29.738953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.302 [2024-07-25 00:02:29.738971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.302 [2024-07-25 00:02:29.739210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.302 [2024-07-25 00:02:29.739463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.302 [2024-07-25 00:02:29.739487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.302 [2024-07-25 00:02:29.739503] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.302 [2024-07-25 00:02:29.743086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.302 [2024-07-25 00:02:29.752434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.302 [2024-07-25 00:02:29.752855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.302 [2024-07-25 00:02:29.752886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.302 [2024-07-25 00:02:29.752904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.302 [2024-07-25 00:02:29.753143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.302 [2024-07-25 00:02:29.753397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.302 [2024-07-25 00:02:29.753421] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.302 [2024-07-25 00:02:29.753437] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.302 [2024-07-25 00:02:29.757018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.302 [2024-07-25 00:02:29.766349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.302 [2024-07-25 00:02:29.766779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.302 [2024-07-25 00:02:29.766811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.302 [2024-07-25 00:02:29.766828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.302 [2024-07-25 00:02:29.767067] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.302 [2024-07-25 00:02:29.767324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.302 [2024-07-25 00:02:29.767348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.302 [2024-07-25 00:02:29.767363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.302 [2024-07-25 00:02:29.770951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.302 [2024-07-25 00:02:29.780297] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:59.302 [2024-07-25 00:02:29.780716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.302 [2024-07-25 00:02:29.780747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:24:59.302 [2024-07-25 00:02:29.780764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:24:59.302 [2024-07-25 00:02:29.781003] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:24:59.302 [2024-07-25 00:02:29.781257] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:59.302 [2024-07-25 00:02:29.781282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:59.302 [2024-07-25 00:02:29.781297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:59.302 [2024-07-25 00:02:29.784880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:59.302 [2024-07-25 00:02:29.794209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.794586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.794617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.794635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.794874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.795116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.795140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.795156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.798759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.302 [2024-07-25 00:02:29.808071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.808477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.808509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.808527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.808766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.809009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.809032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.809047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.812638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.302 [2024-07-25 00:02:29.821962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.822407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.822438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.822461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.822700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.822943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.822967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.822982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.826574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.302 [2024-07-25 00:02:29.835897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.836313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.836345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.836362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.836602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.836845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.836868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.836884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.840476] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.302 [2024-07-25 00:02:29.849788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.850195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.850225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.850251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.850494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.850736] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.850760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.850776] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.854365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.302 [2024-07-25 00:02:29.863676] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.302 [2024-07-25 00:02:29.864081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.302 [2024-07-25 00:02:29.864111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.302 [2024-07-25 00:02:29.864128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.302 [2024-07-25 00:02:29.864378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.302 [2024-07-25 00:02:29.864622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.302 [2024-07-25 00:02:29.864650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.302 [2024-07-25 00:02:29.864666] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.302 [2024-07-25 00:02:29.868255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.303 [2024-07-25 00:02:29.877567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.303 [2024-07-25 00:02:29.877959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.303 [2024-07-25 00:02:29.877990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.303 [2024-07-25 00:02:29.878008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.303 [2024-07-25 00:02:29.878257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.303 [2024-07-25 00:02:29.878501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.303 [2024-07-25 00:02:29.878525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.303 [2024-07-25 00:02:29.878540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.303 [2024-07-25 00:02:29.882124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.303 [2024-07-25 00:02:29.891443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.303 [2024-07-25 00:02:29.891812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.303 [2024-07-25 00:02:29.891843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.303 [2024-07-25 00:02:29.891861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.303 [2024-07-25 00:02:29.892100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.303 [2024-07-25 00:02:29.892354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.303 [2024-07-25 00:02:29.892378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.303 [2024-07-25 00:02:29.892393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.303 [2024-07-25 00:02:29.895974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.303 [2024-07-25 00:02:29.905302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.303 [2024-07-25 00:02:29.905728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.303 [2024-07-25 00:02:29.905759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.303 [2024-07-25 00:02:29.905776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.303 [2024-07-25 00:02:29.906015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.303 [2024-07-25 00:02:29.906269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.303 [2024-07-25 00:02:29.906293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.303 [2024-07-25 00:02:29.906309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.303 [2024-07-25 00:02:29.909893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.563 [2024-07-25 00:02:29.919247] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.919681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.919712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.919729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.919968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.920211] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.920234] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.920261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.923844] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.563 [2024-07-25 00:02:29.933171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.933553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.933585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.933603] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.933843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.934086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.934109] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.934124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.937720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.563 [2024-07-25 00:02:29.947043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.947441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.947473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.947491] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.947731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.947975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.947998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.948014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.951614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.563 [2024-07-25 00:02:29.960939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.961447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.961501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.961518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.961766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.962010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.962033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.962048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.965644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.563 [2024-07-25 00:02:29.974970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.975413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.975444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.975461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.975700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.975943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.975967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.975982] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.979578] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.563 [2024-07-25 00:02:29.988936] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:29.989340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:29.989372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:29.989391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:29.989631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:29.989874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:29.989898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:29.989913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:29.993511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.563 [2024-07-25 00:02:30.002977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:30.003394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:30.003428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:30.003448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:30.003690] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:30.003934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:30.003958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:30.003979] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:30.007591] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.563 [2024-07-25 00:02:30.017150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:30.017617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.563 [2024-07-25 00:02:30.017656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.563 [2024-07-25 00:02:30.017681] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.563 [2024-07-25 00:02:30.017978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.563 [2024-07-25 00:02:30.018307] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.563 [2024-07-25 00:02:30.018343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.563 [2024-07-25 00:02:30.018363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.563 [2024-07-25 00:02:30.022057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.563 [2024-07-25 00:02:30.031196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.563 [2024-07-25 00:02:30.031634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.031667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.031685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.031924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.032168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.032192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.032208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.035814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.564 [2024-07-25 00:02:30.045149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.045536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.045568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.045587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.045826] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.046069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.046093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.046108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.049704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.564 [2024-07-25 00:02:30.059033] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.059420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.059459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.059478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.059718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.059962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.059985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.060001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.063596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.564 [2024-07-25 00:02:30.072915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.073367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.073399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.073418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.073657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.073901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.073924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.073940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.077535] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.564 [2024-07-25 00:02:30.086860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.087257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.087289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.087306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.087545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.087789] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.087812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.087827] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.091420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.564 [2024-07-25 00:02:30.100762] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.101193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.101224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.101251] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.101493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.101743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.101768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.101783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.105383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.564 [2024-07-25 00:02:30.114724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.115158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.115188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.115205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.115452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.115697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.115720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.115735] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.119335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.564 [2024-07-25 00:02:30.128683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.129127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.129159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.129176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.129427] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.129671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.129694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.129710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.133316] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.564 [2024-07-25 00:02:30.142660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.143091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.143122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.143140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.143389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.143634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.143657] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.143673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.147275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.564 [2024-07-25 00:02:30.156600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.157000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.157032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.564 [2024-07-25 00:02:30.157050] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.564 [2024-07-25 00:02:30.157305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.564 [2024-07-25 00:02:30.157549] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.564 [2024-07-25 00:02:30.157573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.564 [2024-07-25 00:02:30.157588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.564 [2024-07-25 00:02:30.161176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.564 [2024-07-25 00:02:30.170505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.564 [2024-07-25 00:02:30.170929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.564 [2024-07-25 00:02:30.170960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.565 [2024-07-25 00:02:30.170978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.565 [2024-07-25 00:02:30.171218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.565 [2024-07-25 00:02:30.171471] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.565 [2024-07-25 00:02:30.171495] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.565 [2024-07-25 00:02:30.171511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.175095] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.824 [2024-07-25 00:02:30.184415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.184832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.184863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.184881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.185120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.185374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.185399] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.185414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.188991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.824 [2024-07-25 00:02:30.198317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.198733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.198764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.198787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.199028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.199284] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.199308] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.199323] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.202905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.824 [2024-07-25 00:02:30.212215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.212616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.212647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.212664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.212903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.213146] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.213170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.213185] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.216777] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.824 [2024-07-25 00:02:30.226090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.226519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.226550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.226568] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.226807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.227050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.227073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.227089] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.230686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.824 [2024-07-25 00:02:30.240043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.240482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.240514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.240532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.240772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.241015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.241044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.241060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.244655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.824 [2024-07-25 00:02:30.253972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.254380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.254412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.254429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.254669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.254913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.254936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.254951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.824 [2024-07-25 00:02:30.258553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.824 [2024-07-25 00:02:30.267877] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.824 [2024-07-25 00:02:30.268298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.824 [2024-07-25 00:02:30.268330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.824 [2024-07-25 00:02:30.268348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.824 [2024-07-25 00:02:30.268587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.824 [2024-07-25 00:02:30.268831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.824 [2024-07-25 00:02:30.268854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.824 [2024-07-25 00:02:30.268869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.272471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.825 [2024-07-25 00:02:30.281982] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.282363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.282395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.282412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.282651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.282895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.282918] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.282933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.286528] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.825 [2024-07-25 00:02:30.295857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.296344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.296398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.296416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.296655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.296898] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.296922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.296937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.300549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.825 [2024-07-25 00:02:30.309870] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.310288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.310326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.310345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.310585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.310829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.310852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.310868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.314463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.825 [2024-07-25 00:02:30.323773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.324168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.324199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.324217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.324467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.324712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.324735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.324751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.328343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.825 [2024-07-25 00:02:30.337671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.338081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.338113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.338130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.338387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.338632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.338656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.338671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.342258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.825 [2024-07-25 00:02:30.351570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.351966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.351997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.352014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.352264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.352507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.352531] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.352547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.356127] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.825 [2024-07-25 00:02:30.365523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.365958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.365989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.366007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.366256] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.366500] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.366523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.366538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.370120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.825 [2024-07-25 00:02:30.379449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.379846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.379878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.379895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.380135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.380389] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.380413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.380434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.384018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.825 [2024-07-25 00:02:30.393339] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.393753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.393784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.393802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.394041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.394295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.394319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.825 [2024-07-25 00:02:30.394335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.825 [2024-07-25 00:02:30.397917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:59.825 [2024-07-25 00:02:30.407235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.825 [2024-07-25 00:02:30.407667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.825 [2024-07-25 00:02:30.407697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.825 [2024-07-25 00:02:30.407715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.825 [2024-07-25 00:02:30.407953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.825 [2024-07-25 00:02:30.408196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.825 [2024-07-25 00:02:30.408219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.826 [2024-07-25 00:02:30.408234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.826 [2024-07-25 00:02:30.411828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:59.826 [2024-07-25 00:02:30.421144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:59.826 [2024-07-25 00:02:30.421570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.826 [2024-07-25 00:02:30.421601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:24:59.826 [2024-07-25 00:02:30.421619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:24:59.826 [2024-07-25 00:02:30.421857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:24:59.826 [2024-07-25 00:02:30.422099] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:59.826 [2024-07-25 00:02:30.422123] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:59.826 [2024-07-25 00:02:30.422138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:59.826 [2024-07-25 00:02:30.425729] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.085 [2024-07-25 00:02:30.435055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.435469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.435500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.435518] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.435758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.436001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.436025] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.436040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.439639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.085 [2024-07-25 00:02:30.448955] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.449378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.449409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.449427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.449665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.449909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.449932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.449948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.453543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.085 [2024-07-25 00:02:30.462865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.463350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.463381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.463398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.463637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.463881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.463904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.463920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.467514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.085 [2024-07-25 00:02:30.476831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.477301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.477332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.477350] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.477595] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.477839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.477862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.477878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.481475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.085 [2024-07-25 00:02:30.490799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.491194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.491225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.491252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.491494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.491737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.491760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.491775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.495366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.085 [2024-07-25 00:02:30.504689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.505184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.505215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.505232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.505482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.505726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.505749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.505764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.509357] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.085 [2024-07-25 00:02:30.518675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.519211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.519279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.519297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.519536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.085 [2024-07-25 00:02:30.519779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.085 [2024-07-25 00:02:30.519803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.085 [2024-07-25 00:02:30.519823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.085 [2024-07-25 00:02:30.523419] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.085 [2024-07-25 00:02:30.532540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.085 [2024-07-25 00:02:30.532955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-25 00:02:30.532986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.085 [2024-07-25 00:02:30.533003] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.085 [2024-07-25 00:02:30.533252] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.533496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.533519] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.533535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.537135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.086 [2024-07-25 00:02:30.546467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.546897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.546929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.546947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.547186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.547440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.547465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.547480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.551065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.086 [2024-07-25 00:02:30.560387] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.560792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.560824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.560841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.561081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.561335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.561360] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.561375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.564959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.086 [2024-07-25 00:02:30.574284] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.574699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.574735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.574753] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.574992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.575236] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.575269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.575284] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.578869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.086 [2024-07-25 00:02:30.588182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.588581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.588613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.588631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.588870] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.589114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.589137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.589153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.592747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.086 [2024-07-25 00:02:30.602067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.602490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.602521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.602538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.602777] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.603020] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.603043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.603058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.606656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.086 [2024-07-25 00:02:30.615968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.616521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.616577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.616595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.616834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.617084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.617107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.617122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.620715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.086 [2024-07-25 00:02:30.629834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.630308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.630340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.630357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.630597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.630841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.630864] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.630879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.634474] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.086 [2024-07-25 00:02:30.643801] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.644230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.644269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.644287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.644536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.644780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.644803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.644818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.648412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.086 [2024-07-25 00:02:30.657734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.658137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.658167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.658185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.658435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.658680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.086 [2024-07-25 00:02:30.658703] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.086 [2024-07-25 00:02:30.658719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.086 [2024-07-25 00:02:30.662321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.086 [2024-07-25 00:02:30.671634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.086 [2024-07-25 00:02:30.672028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-25 00:02:30.672059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.086 [2024-07-25 00:02:30.672077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.086 [2024-07-25 00:02:30.672327] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.086 [2024-07-25 00:02:30.672572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.087 [2024-07-25 00:02:30.672595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.087 [2024-07-25 00:02:30.672610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.087 [2024-07-25 00:02:30.676193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.087 [2024-07-25 00:02:30.685540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.087 [2024-07-25 00:02:30.685961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-25 00:02:30.685992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.087 [2024-07-25 00:02:30.686010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.087 [2024-07-25 00:02:30.686259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.087 [2024-07-25 00:02:30.686503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.087 [2024-07-25 00:02:30.686528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.087 [2024-07-25 00:02:30.686543] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.087 [2024-07-25 00:02:30.690128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.346 [2024-07-25 00:02:30.699466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.699859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.699890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.699908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.700148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.700404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.700430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.700445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.704031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.346 [2024-07-25 00:02:30.713364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.713758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.713789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.713812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.714053] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.714306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.714332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.714347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.717932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.346 [2024-07-25 00:02:30.727277] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.727714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.727744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.727761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.728000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.728254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.728278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.728294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.731879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.346 [2024-07-25 00:02:30.741233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.741639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.741671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.741688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.741928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.742171] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.742194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.742209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.745808] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.346 [2024-07-25 00:02:30.755162] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.755569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.755601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.755618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.755857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.756100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.756130] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.756146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.759761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.346 [2024-07-25 00:02:30.769133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.769544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.769575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.769593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.769832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.770075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.770098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.770113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.773724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.346 [2024-07-25 00:02:30.783062] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.783465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.783496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.783514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.783752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.783996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.346 [2024-07-25 00:02:30.784019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.346 [2024-07-25 00:02:30.784035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.346 [2024-07-25 00:02:30.787632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.346 [2024-07-25 00:02:30.796974] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.346 [2024-07-25 00:02:30.797389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.346 [2024-07-25 00:02:30.797420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.346 [2024-07-25 00:02:30.797437] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.346 [2024-07-25 00:02:30.797675] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.346 [2024-07-25 00:02:30.797919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.797942] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.797958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.801581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.347 [2024-07-25 00:02:30.810931] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.811328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.811360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.811377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.811616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.811860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.811883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.811899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.815524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.347 [2024-07-25 00:02:30.824872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.825282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.825314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.825332] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.825570] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.825814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.825837] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.825853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.829455] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.347 [2024-07-25 00:02:30.838822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.839232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.839272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.839291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.839530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.839773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.839797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.839812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.843409] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.347 [2024-07-25 00:02:30.852736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.853150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.853180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.853198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.853453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.853697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.853720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.853736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.857339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.347 [2024-07-25 00:02:30.866669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.867076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.867107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.867124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.867374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.867619] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.867642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.867657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.871251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.347 [2024-07-25 00:02:30.880581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.881004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.881035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.881052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.881301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.881545] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.881568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.881583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.885177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.347 [2024-07-25 00:02:30.894510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.894907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.894938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.894956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.895195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.895449] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.895474] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.895495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.899079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.347 [2024-07-25 00:02:30.908408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.908805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.908836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.908853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.909092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.909346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.909371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.909386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.912972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.347 [2024-07-25 00:02:30.922293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.922730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.922761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.922778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.347 [2024-07-25 00:02:30.923016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.347 [2024-07-25 00:02:30.923270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.347 [2024-07-25 00:02:30.923294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.347 [2024-07-25 00:02:30.923309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.347 [2024-07-25 00:02:30.926898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.347 [2024-07-25 00:02:30.936225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.347 [2024-07-25 00:02:30.936655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.347 [2024-07-25 00:02:30.936686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.347 [2024-07-25 00:02:30.936704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.348 [2024-07-25 00:02:30.936943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.348 [2024-07-25 00:02:30.937186] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.348 [2024-07-25 00:02:30.937210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.348 [2024-07-25 00:02:30.937225] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.348 [2024-07-25 00:02:30.940816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.348 [2024-07-25 00:02:30.950136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.348 [2024-07-25 00:02:30.950548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.348 [2024-07-25 00:02:30.950580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.348 [2024-07-25 00:02:30.950597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.348 [2024-07-25 00:02:30.950837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.348 [2024-07-25 00:02:30.951080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.348 [2024-07-25 00:02:30.951103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.348 [2024-07-25 00:02:30.951119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.348 [2024-07-25 00:02:30.954717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.607 [2024-07-25 00:02:30.964041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.607 [2024-07-25 00:02:30.964426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-25 00:02:30.964458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.607 [2024-07-25 00:02:30.964476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.607 [2024-07-25 00:02:30.964715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.607 [2024-07-25 00:02:30.964958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.607 [2024-07-25 00:02:30.964982] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.607 [2024-07-25 00:02:30.964998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.607 [2024-07-25 00:02:30.968599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.607 [2024-07-25 00:02:30.977925] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.607 [2024-07-25 00:02:30.978324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-25 00:02:30.978355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.607 [2024-07-25 00:02:30.978373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.607 [2024-07-25 00:02:30.978611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.607 [2024-07-25 00:02:30.978854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.607 [2024-07-25 00:02:30.978878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.607 [2024-07-25 00:02:30.978893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.607 [2024-07-25 00:02:30.982491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.607 [2024-07-25 00:02:30.991804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.607 [2024-07-25 00:02:30.992221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-25 00:02:30.992259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.607 [2024-07-25 00:02:30.992278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.607 [2024-07-25 00:02:30.992530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.607 [2024-07-25 00:02:30.992774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.607 [2024-07-25 00:02:30.992797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.607 [2024-07-25 00:02:30.992812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.607 [2024-07-25 00:02:30.996408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.607 [2024-07-25 00:02:31.005737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.607 [2024-07-25 00:02:31.006142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-25 00:02:31.006173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.607 [2024-07-25 00:02:31.006190] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.607 [2024-07-25 00:02:31.006438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.607 [2024-07-25 00:02:31.006682] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.607 [2024-07-25 00:02:31.006705] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.607 [2024-07-25 00:02:31.006721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.607 [2024-07-25 00:02:31.010313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.607 [2024-07-25 00:02:31.019633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.607 [2024-07-25 00:02:31.020051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.607 [2024-07-25 00:02:31.020081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.607 [2024-07-25 00:02:31.020098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.607 [2024-07-25 00:02:31.020347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.020591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.020614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.020630] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.024215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.608 [2024-07-25 00:02:31.033538] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.033917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.033948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.033966] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.034205] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.034457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.034481] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.034497] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.038097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.608 [2024-07-25 00:02:31.047426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.047852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.047883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.047901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.048141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.048398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.048422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.048438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.052022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.608 [2024-07-25 00:02:31.061346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.061755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.061787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.061804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.062044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.062298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.062322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.062338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.065919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.608 [2024-07-25 00:02:31.075231] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.075658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.075689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.075706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.075945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.076188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.076212] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.076227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.079819] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.608 [2024-07-25 00:02:31.089135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.089541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.089577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.089595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.089835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.090078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.090101] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.090117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.093708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.608 [2024-07-25 00:02:31.103026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.103427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.103458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.103475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.103714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.103957] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.103980] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.103995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.107590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.608 [2024-07-25 00:02:31.116905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.117336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.117367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.117385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.117624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.117867] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.117890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.117905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.121498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.608 [2024-07-25 00:02:31.130808] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.131229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.131267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.131286] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.131525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.131773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.131797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.131813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.135463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.608 [2024-07-25 00:02:31.144800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.145228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.145265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.608 [2024-07-25 00:02:31.145284] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.608 [2024-07-25 00:02:31.145523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.608 [2024-07-25 00:02:31.145766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.608 [2024-07-25 00:02:31.145790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.608 [2024-07-25 00:02:31.145806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.608 [2024-07-25 00:02:31.149399] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.608 [2024-07-25 00:02:31.158737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.608 [2024-07-25 00:02:31.159159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.608 [2024-07-25 00:02:31.159189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.609 [2024-07-25 00:02:31.159207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.609 [2024-07-25 00:02:31.159457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.609 [2024-07-25 00:02:31.159702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.609 [2024-07-25 00:02:31.159726] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.609 [2024-07-25 00:02:31.159742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.609 [2024-07-25 00:02:31.163333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.609 [2024-07-25 00:02:31.172647] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.609 [2024-07-25 00:02:31.173046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-25 00:02:31.173078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.609 [2024-07-25 00:02:31.173096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.609 [2024-07-25 00:02:31.173345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.609 [2024-07-25 00:02:31.173589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.609 [2024-07-25 00:02:31.173612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.609 [2024-07-25 00:02:31.173628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.609 [2024-07-25 00:02:31.177215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.609 [2024-07-25 00:02:31.186540] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.609 [2024-07-25 00:02:31.186937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-25 00:02:31.186969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.609 [2024-07-25 00:02:31.186986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.609 [2024-07-25 00:02:31.187226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.609 [2024-07-25 00:02:31.187481] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.609 [2024-07-25 00:02:31.187505] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.609 [2024-07-25 00:02:31.187520] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.609 [2024-07-25 00:02:31.191105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.609 [2024-07-25 00:02:31.200440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.609 [2024-07-25 00:02:31.200841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-25 00:02:31.200872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.609 [2024-07-25 00:02:31.200889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.609 [2024-07-25 00:02:31.201128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.609 [2024-07-25 00:02:31.201383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.609 [2024-07-25 00:02:31.201408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.609 [2024-07-25 00:02:31.201423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.609 [2024-07-25 00:02:31.205028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.609 [2024-07-25 00:02:31.214348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.609 [2024-07-25 00:02:31.214750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.609 [2024-07-25 00:02:31.214781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.609 [2024-07-25 00:02:31.214799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.609 [2024-07-25 00:02:31.215038] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.609 [2024-07-25 00:02:31.215290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.609 [2024-07-25 00:02:31.215314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.609 [2024-07-25 00:02:31.215330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.868 [2024-07-25 00:02:31.218918] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.868 [2024-07-25 00:02:31.228236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.868 [2024-07-25 00:02:31.228660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.868 [2024-07-25 00:02:31.228691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.868 [2024-07-25 00:02:31.228714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.868 [2024-07-25 00:02:31.228955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.868 [2024-07-25 00:02:31.229198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.868 [2024-07-25 00:02:31.229222] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.868 [2024-07-25 00:02:31.229237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.868 [2024-07-25 00:02:31.232830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.868 [2024-07-25 00:02:31.242159] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.868 [2024-07-25 00:02:31.242538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.868 [2024-07-25 00:02:31.242569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.868 [2024-07-25 00:02:31.242587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.868 [2024-07-25 00:02:31.242827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.868 [2024-07-25 00:02:31.243070] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.868 [2024-07-25 00:02:31.243093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.868 [2024-07-25 00:02:31.243109] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.868 [2024-07-25 00:02:31.246702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.868 [2024-07-25 00:02:31.256019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.868 [2024-07-25 00:02:31.256426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.868 [2024-07-25 00:02:31.256457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.868 [2024-07-25 00:02:31.256475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.256714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.256958] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.256981] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.256997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.260589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.869 [2024-07-25 00:02:31.269921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.270349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.270381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.270399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.270637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.270881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.270910] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.270926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.274524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-07-25 00:02:31.283860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.284261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.284296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.284314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.284553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.284796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.284820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.284835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.288426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.869 [2024-07-25 00:02:31.297994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.298416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.298447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.298465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.298704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.298948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.298971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.298986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.302584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-07-25 00:02:31.311903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.312350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.312382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.312399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.312638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.312881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.312904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.312920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.316510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.869 [2024-07-25 00:02:31.325847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.326263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.326295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.326313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.326552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.326795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.326819] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.326834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.330429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-07-25 00:02:31.339752] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.340151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.340183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.340200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.340448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.340693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.340716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.340731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.344321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.869 [2024-07-25 00:02:31.353634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.354035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.354067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.354084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.354343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.354589] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.354613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.354628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.358212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-07-25 00:02:31.367530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.367950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.367981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.367999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.368258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.368502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.368526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.368541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.372125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.869 [2024-07-25 00:02:31.381444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.869 [2024-07-25 00:02:31.381879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.869 [2024-07-25 00:02:31.381910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.869 [2024-07-25 00:02:31.381927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.869 [2024-07-25 00:02:31.382166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.869 [2024-07-25 00:02:31.382418] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.869 [2024-07-25 00:02:31.382442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.869 [2024-07-25 00:02:31.382457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.869 [2024-07-25 00:02:31.386036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.869 [2024-07-25 00:02:31.395499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.395926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.395956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.395974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.396213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.396465] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.396489] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.396505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.400092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.870 [2024-07-25 00:02:31.409410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.409837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.409868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.409885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.410125] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.410379] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.410403] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.410424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.414009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.870 [2024-07-25 00:02:31.423327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.423729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.423759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.423777] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.424016] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.424269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.424293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.424307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.427890] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.870 [2024-07-25 00:02:31.437198] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.437616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.437647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.437665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.437904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.438157] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.438181] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.438195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.441785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.870 [2024-07-25 00:02:31.451094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.451522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.451553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.451571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.451810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.452053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.452077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.452092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.455684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:00.870 [2024-07-25 00:02:31.464998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:00.870 [2024-07-25 00:02:31.465437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.870 [2024-07-25 00:02:31.465468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:00.870 [2024-07-25 00:02:31.465486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:00.870 [2024-07-25 00:02:31.465725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:00.870 [2024-07-25 00:02:31.465968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.870 [2024-07-25 00:02:31.465991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:00.870 [2024-07-25 00:02:31.466006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:00.870 [2024-07-25 00:02:31.469599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:00.870 [2024-07-25 00:02:31.478909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.129 [2024-07-25 00:02:31.479289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.129 [2024-07-25 00:02:31.479320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.129 [2024-07-25 00:02:31.479338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.129 [2024-07-25 00:02:31.479577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.129 [2024-07-25 00:02:31.479821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.129 [2024-07-25 00:02:31.479844] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.129 [2024-07-25 00:02:31.479859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.129 [2024-07-25 00:02:31.483453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.129 [2024-07-25 00:02:31.492768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.129 [2024-07-25 00:02:31.493187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.129 [2024-07-25 00:02:31.493218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.129 [2024-07-25 00:02:31.493236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.129 [2024-07-25 00:02:31.493485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.129 [2024-07-25 00:02:31.493729] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.129 [2024-07-25 00:02:31.493752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.129 [2024-07-25 00:02:31.493768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.129 [2024-07-25 00:02:31.497359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.129 [2024-07-25 00:02:31.506670] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.129 [2024-07-25 00:02:31.507085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.129 [2024-07-25 00:02:31.507116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.129 [2024-07-25 00:02:31.507133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.129 [2024-07-25 00:02:31.507382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.129 [2024-07-25 00:02:31.507631] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.129 [2024-07-25 00:02:31.507656] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.129 [2024-07-25 00:02:31.507671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.129 [2024-07-25 00:02:31.511258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.129 [2024-07-25 00:02:31.520573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.129 [2024-07-25 00:02:31.520963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.520994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.521011] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.521258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.521502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.521525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.521541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.525126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.130 [2024-07-25 00:02:31.534446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.534846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.534895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.535134] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.535387] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.535411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.535426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.539022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.130 [2024-07-25 00:02:31.548342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.548742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.548773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.548790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.549029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.549282] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.549306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.549321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.552910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.130 [2024-07-25 00:02:31.562217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.562652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.562684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.562701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.562940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.563184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.563207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.563222] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.566814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.130 [2024-07-25 00:02:31.576121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.576553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.576584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.576601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.576840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.577084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.577107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.577122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.580716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.130 [2024-07-25 00:02:31.590038] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.590454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.590485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.590503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.590743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.590986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.591009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.591024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.594616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.130 [2024-07-25 00:02:31.603924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.604328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.604366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.604389] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.604630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.604873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.604896] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.604912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.608504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.130 [2024-07-25 00:02:31.617809] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.130 [2024-07-25 00:02:31.618212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.130 [2024-07-25 00:02:31.618250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.130 [2024-07-25 00:02:31.618269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.130 [2024-07-25 00:02:31.618508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.130 [2024-07-25 00:02:31.618751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.130 [2024-07-25 00:02:31.618775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.130 [2024-07-25 00:02:31.618790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.130 [2024-07-25 00:02:31.622382] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.130 [2024-07-25 00:02:31.631693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.130 [2024-07-25 00:02:31.632083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.130 [2024-07-25 00:02:31.632113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.130 [2024-07-25 00:02:31.632131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.130 [2024-07-25 00:02:31.632380] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.130 [2024-07-25 00:02:31.632625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.130 [2024-07-25 00:02:31.632648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.130 [2024-07-25 00:02:31.632663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.130 [2024-07-25 00:02:31.636252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.130 [2024-07-25 00:02:31.645594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.130 [2024-07-25 00:02:31.646007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.130 [2024-07-25 00:02:31.646039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.130 [2024-07-25 00:02:31.646056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.130 [2024-07-25 00:02:31.646308] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.130 [2024-07-25 00:02:31.646557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.130 [2024-07-25 00:02:31.646581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.130 [2024-07-25 00:02:31.646596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.130 [2024-07-25 00:02:31.650179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.130 [2024-07-25 00:02:31.659495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.130 [2024-07-25 00:02:31.659888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.130 [2024-07-25 00:02:31.659918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.659936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.660174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.660427] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.660451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.660466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.664048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.131 [2024-07-25 00:02:31.673364] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.131 [2024-07-25 00:02:31.673774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.131 [2024-07-25 00:02:31.673804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.673822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.674060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.674314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.674338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.674353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.677935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.131 [2024-07-25 00:02:31.687250] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.131 [2024-07-25 00:02:31.687690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.131 [2024-07-25 00:02:31.687721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.687739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.687978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.688221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.688254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.688272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.691852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.131 [2024-07-25 00:02:31.701173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.131 [2024-07-25 00:02:31.701606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.131 [2024-07-25 00:02:31.701637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.701655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.701893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.702135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.702159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.702175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.705767] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.131 [2024-07-25 00:02:31.715079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.131 [2024-07-25 00:02:31.715460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.131 [2024-07-25 00:02:31.715491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.715509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.715748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.715991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.716015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.716030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.719619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.131 [2024-07-25 00:02:31.729133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.131 [2024-07-25 00:02:31.729538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.131 [2024-07-25 00:02:31.729569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.131 [2024-07-25 00:02:31.729586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.131 [2024-07-25 00:02:31.729825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.131 [2024-07-25 00:02:31.730069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.131 [2024-07-25 00:02:31.730092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.131 [2024-07-25 00:02:31.730107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.131 [2024-07-25 00:02:31.733700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.390 [2024-07-25 00:02:31.743026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.390 [2024-07-25 00:02:31.743431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.390 [2024-07-25 00:02:31.743462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.390 [2024-07-25 00:02:31.743484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.390 [2024-07-25 00:02:31.743724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.390 [2024-07-25 00:02:31.743968] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.390 [2024-07-25 00:02:31.743991] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.390 [2024-07-25 00:02:31.744006] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.390 [2024-07-25 00:02:31.747600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.390 [2024-07-25 00:02:31.756917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.390 [2024-07-25 00:02:31.757325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.390 [2024-07-25 00:02:31.757356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.390 [2024-07-25 00:02:31.757374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.390 [2024-07-25 00:02:31.757613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.390 [2024-07-25 00:02:31.757856] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.390 [2024-07-25 00:02:31.757879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.390 [2024-07-25 00:02:31.757895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.390 [2024-07-25 00:02:31.761487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.390 [2024-07-25 00:02:31.770800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.390 [2024-07-25 00:02:31.771205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.390 [2024-07-25 00:02:31.771236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.390 [2024-07-25 00:02:31.771264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.390 [2024-07-25 00:02:31.771504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.771748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.771770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.771785] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.775387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.784712] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.785110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.785141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.785158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.785408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.785653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.785683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.785699] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.789294] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.798623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.799148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.799179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.799196] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.799448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.799693] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.799716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.799731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.803321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.812636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.813098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.813148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.813182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.813433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.813677] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.813700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.813715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.817305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.826619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.826997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.827029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.827047] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.827299] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.827543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.827567] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.827583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.831166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.840513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.840932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.840962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.840980] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.841219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.841473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.841497] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.841513] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.845106] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.854447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.854986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.855053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.855071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.855324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.855568] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.855592] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.855607] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.859195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.868331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.868750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.868785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.868802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.869042] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.869299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.869323] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.869338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.872919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.882278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.882706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.882740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.882757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.883002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.883258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.883283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.883299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.886886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.896200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.896605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.896635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.896653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.896892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.391 [2024-07-25 00:02:31.897135] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.391 [2024-07-25 00:02:31.897159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.391 [2024-07-25 00:02:31.897174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.391 [2024-07-25 00:02:31.900771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.391 [2024-07-25 00:02:31.910095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.391 [2024-07-25 00:02:31.910483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.391 [2024-07-25 00:02:31.910514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.391 [2024-07-25 00:02:31.910532] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.391 [2024-07-25 00:02:31.910770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.911013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.911036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.911051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.914645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.923967] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.924354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.924385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.924403] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.924642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.924885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.924908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.924929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.928524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.937845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.938238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.938278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.938295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.938534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.938777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.938801] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.938816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.942427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.951759] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.952178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.952209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.952227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.952474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.952718] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.952742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.952757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.956361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.965723] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.966144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.966175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.966192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.966440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.966685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.966708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.966724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.970326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.979665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.980058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.980094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.980112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.980363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.980607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.980630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.980645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.984232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.392 [2024-07-25 00:02:31.993583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.392 [2024-07-25 00:02:31.994064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.392 [2024-07-25 00:02:31.994113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.392 [2024-07-25 00:02:31.994131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.392 [2024-07-25 00:02:31.994381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.392 [2024-07-25 00:02:31.994625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.392 [2024-07-25 00:02:31.994648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.392 [2024-07-25 00:02:31.994663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.392 [2024-07-25 00:02:31.998264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.651 [2024-07-25 00:02:32.007606] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.651 [2024-07-25 00:02:32.008080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.651 [2024-07-25 00:02:32.008110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.651 [2024-07-25 00:02:32.008128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.651 [2024-07-25 00:02:32.008376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.651 [2024-07-25 00:02:32.008620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.651 [2024-07-25 00:02:32.008644] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.651 [2024-07-25 00:02:32.008658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.651 [2024-07-25 00:02:32.012257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.651 [2024-07-25 00:02:32.021591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.651 [2024-07-25 00:02:32.022093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.651 [2024-07-25 00:02:32.022144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.651 [2024-07-25 00:02:32.022162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.651 [2024-07-25 00:02:32.022413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.651 [2024-07-25 00:02:32.022668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.651 [2024-07-25 00:02:32.022692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.651 [2024-07-25 00:02:32.022708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.651 [2024-07-25 00:02:32.026305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.651 [2024-07-25 00:02:32.035632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.651 [2024-07-25 00:02:32.036119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.651 [2024-07-25 00:02:32.036149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.651 [2024-07-25 00:02:32.036167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.651 [2024-07-25 00:02:32.036415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.651 [2024-07-25 00:02:32.036659] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.651 [2024-07-25 00:02:32.036682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.651 [2024-07-25 00:02:32.036697] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.651 [2024-07-25 00:02:32.040293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.651 [2024-07-25 00:02:32.049636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.651 [2024-07-25 00:02:32.050180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.651 [2024-07-25 00:02:32.050235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.651 [2024-07-25 00:02:32.050264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.651 [2024-07-25 00:02:32.050504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.651 [2024-07-25 00:02:32.050747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.651 [2024-07-25 00:02:32.050770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.050786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.054380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.063495] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.064033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.064089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.064107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.064358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.064602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.064625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.064640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.068234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.077355] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.077778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.077809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.077827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.078066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.078321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.078345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.078361] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.081946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.091267] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.091690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.091721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.091739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.091977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.092220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.092254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.092272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.095860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.105184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.105624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.105655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.105673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.105911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.106154] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.106177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.106193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.109790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.119105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.119536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.119567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.119591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.119830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.120074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.120097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.120112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.123707] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.133027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.133432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.133464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.133481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.133720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.133963] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.133987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.134002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.137599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.146923] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.147350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.147381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.147399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.147638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.147881] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.147904] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.147919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.151515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.160863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.161288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.161319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.161337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.161576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.161819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.161848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.161864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.165460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.174774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.175169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.175199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.175216] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.175465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.175709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.175733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.175748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.652 [2024-07-25 00:02:32.179337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.652 [2024-07-25 00:02:32.188649] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.652 [2024-07-25 00:02:32.189067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.652 [2024-07-25 00:02:32.189097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.652 [2024-07-25 00:02:32.189115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.652 [2024-07-25 00:02:32.189364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.652 [2024-07-25 00:02:32.189607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.652 [2024-07-25 00:02:32.189631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.652 [2024-07-25 00:02:32.189646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.653 [2024-07-25 00:02:32.193231] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.653 [2024-07-25 00:02:32.202553] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.653 [2024-07-25 00:02:32.202970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.653 [2024-07-25 00:02:32.203001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.653 [2024-07-25 00:02:32.203018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.653 [2024-07-25 00:02:32.203268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.653 [2024-07-25 00:02:32.203511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.653 [2024-07-25 00:02:32.203535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.653 [2024-07-25 00:02:32.203550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.653 [2024-07-25 00:02:32.207137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.653 [2024-07-25 00:02:32.216478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.653 [2024-07-25 00:02:32.216875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.653 [2024-07-25 00:02:32.216906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.653 [2024-07-25 00:02:32.216924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.653 [2024-07-25 00:02:32.217163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.653 [2024-07-25 00:02:32.217417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.653 [2024-07-25 00:02:32.217441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.653 [2024-07-25 00:02:32.217456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.653 [2024-07-25 00:02:32.221039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.653 [2024-07-25 00:02:32.230363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:01.653 [2024-07-25 00:02:32.230781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.653 [2024-07-25 00:02:32.230812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:01.653 [2024-07-25 00:02:32.230830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:01.653 [2024-07-25 00:02:32.231069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:01.653 [2024-07-25 00:02:32.231324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:01.653 [2024-07-25 00:02:32.231348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:01.653 [2024-07-25 00:02:32.231363] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:01.653 [2024-07-25 00:02:32.234944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:01.653 [2024-07-25 00:02:32.244279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.653 [2024-07-25 00:02:32.244706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-25 00:02:32.244737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.653 [2024-07-25 00:02:32.244755] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.653 [2024-07-25 00:02:32.244994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.653 [2024-07-25 00:02:32.245237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.653 [2024-07-25 00:02:32.245272] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.653 [2024-07-25 00:02:32.245287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.653 [2024-07-25 00:02:32.248873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3477565 Killed "${NVMF_APP[@]}" "$@" 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=3478519 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 3478519 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 3478519 ']' 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
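bdevperf.sh line 35 has just killed the old target process (pid 3477565), and tgt_init restarts it through nvmfappstart, which blocks in waitforlisten until the new nvmf_tgt (pid 3478519) answers RPCs on /var/tmp/spdk.sock; the reconnect errors keep firing in the gap. A minimal sketch of that wait loop, as a hypothetical stand-in rather than the actual waitforlisten helper in common/autotest_common.sh:

    # Hypothetical re-implementation of the wait: poll the RPC socket until the
    # freshly started target answers, or give up after ~50 s.
    wait_for_rpc_socket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
        while ((retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1                        # target died
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1
    }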
00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:01.653 00:02:32 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:01.653 [2024-07-25 00:02:32.258893] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.653 [2024-07-25 00:02:32.259316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.653 [2024-07-25 00:02:32.259351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.653 [2024-07-25 00:02:32.259370] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.653 [2024-07-25 00:02:32.259611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.653 [2024-07-25 00:02:32.259855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.653 [2024-07-25 00:02:32.259879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.653 [2024-07-25 00:02:32.259895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.263514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.913 [2024-07-25 00:02:32.272878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.273303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.273336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.273354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.273594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.273837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.273861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.273876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.277487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.913 [2024-07-25 00:02:32.286849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.287257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.287289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.287313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.287553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.287797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.287820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.287837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.291437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.913 [2024-07-25 00:02:32.300764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.301196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.301228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.301254] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.301497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.301740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.301764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.301780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.305372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.913 [2024-07-25 00:02:32.305811] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:25:01.913 [2024-07-25 00:02:32.305881] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.913 [2024-07-25 00:02:32.314853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.315261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.315294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.315312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.315552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.315796] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.315820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.315835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.319436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.913 [2024-07-25 00:02:32.328757] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.329163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.329194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.329212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.329487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.329731] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.329755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.329770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.333364] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.913 [2024-07-25 00:02:32.342711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.343143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.343174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.343193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.343443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.343687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.343710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.343725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.913 [2024-07-25 00:02:32.347317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.913 [2024-07-25 00:02:32.356697] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.913 [2024-07-25 00:02:32.357099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.913 [2024-07-25 00:02:32.357131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.913 [2024-07-25 00:02:32.357148] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.913 [2024-07-25 00:02:32.357398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.913 [2024-07-25 00:02:32.357642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.913 [2024-07-25 00:02:32.357665] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.913 [2024-07-25 00:02:32.357680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.913 [2024-07-25 00:02:32.361277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
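The "No free 2048 kB hugepages reported on node 1" EAL notice means DPDK found no 2 MiB hugepages on that NUMA node during startup; the target continues because pages are available elsewhere. Inspecting or topping up the per-node pool uses the standard Linux sysfs knobs (generic commands, not taken from this log):

    # Show current 2 MiB hugepage counts, then reserve 1024 on node 1 (needs root):
    grep -i huge /proc/meminfo
    cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages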
00:25:01.913 [2024-07-25 00:02:32.370589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.371014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.371046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.371065] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.371314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.371558] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.371587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.371603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.375189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.382299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:01.914 [2024-07-25 00:02:32.384522] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.384931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.384963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.384982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.385224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.385489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.385513] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.385531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.389151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.914 [2024-07-25 00:02:32.398504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.399109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.399150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.399172] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.399439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.399689] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.399713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.399731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.403324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.412464] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.412910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.412941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.412959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.413198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.413452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.413477] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.413493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.417076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.914 [2024-07-25 00:02:32.426411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.426812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.426844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.426862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.427101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.427357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.427382] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.427398] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.430984] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.440329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.440774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.440806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.440824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.441063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.441319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.441344] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.441360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.444967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.914 [2024-07-25 00:02:32.454329] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.454942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.454984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.455006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.455267] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.455515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.455550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.455568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.459166] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.468289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.468741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.468772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.468802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.469043] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.469297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.469322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.469338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.472922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:01.914 [2024-07-25 00:02:32.482255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.482668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.482699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.482718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.482959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.483203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.483227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.483252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.486843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.496166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.914 [2024-07-25 00:02:32.496580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.914 [2024-07-25 00:02:32.496612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.914 [2024-07-25 00:02:32.496629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.914 [2024-07-25 00:02:32.496869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.914 [2024-07-25 00:02:32.497112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.914 [2024-07-25 00:02:32.497135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.914 [2024-07-25 00:02:32.497151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.914 [2024-07-25 00:02:32.500756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:01.914 [2024-07-25 00:02:32.503599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.915 [2024-07-25 00:02:32.503634] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.915 [2024-07-25 00:02:32.503650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.915 [2024-07-25 00:02:32.503664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.915 [2024-07-25 00:02:32.503676] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
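Following the app_setup_trace notices above, the tracepoint snapshot can be pulled while the target is still running, or the shared-memory file kept for offline analysis (binary path assumes the build/bin layout used elsewhere in this run):

    # Live snapshot of the nvmf tracepoint group on instance 0:
    build/bin/spdk_trace -s nvmf -i 0
    # Or keep the raw trace file for later debugging:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0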
00:25:01.915 [2024-07-25 00:02:32.503766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.915 [2024-07-25 00:02:32.503823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.915 [2024-07-25 00:02:32.503828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:01.915 [2024-07-25 00:02:32.510058] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:01.915 [2024-07-25 00:02:32.510572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.915 [2024-07-25 00:02:32.510610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:01.915 [2024-07-25 00:02:32.510630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:01.915 [2024-07-25 00:02:32.510877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:01.915 [2024-07-25 00:02:32.511124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:01.915 [2024-07-25 00:02:32.511148] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:01.915 [2024-07-25 00:02:32.511166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:01.915 [2024-07-25 00:02:32.514789] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.173 [2024-07-25 00:02:32.524177] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.173 [2024-07-25 00:02:32.524735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.173 [2024-07-25 00:02:32.524779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.173 [2024-07-25 00:02:32.524801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.173 [2024-07-25 00:02:32.525051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.173 [2024-07-25 00:02:32.525312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.173 [2024-07-25 00:02:32.525337] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.173 [2024-07-25 00:02:32.525355] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.173 [2024-07-25 00:02:32.528959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
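The three reactor notices match the -m 0xE core mask passed to nvmf_tgt: 0xE = 0b1110, so reactors run on cores 1, 2 and 3 while core 0 is left free, which is also why app.c reported "Total cores available: 3". A one-line check of the decode:

    # Bits of 0xE from core 0 upward; prints '0 1 1 1' (core 0 off, cores 1-3 on):
    printf '%d %d %d %d\n' $((0xE & 1)) $((0xE >> 1 & 1)) $((0xE >> 2 & 1)) $((0xE >> 3 & 1))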
00:25:02.173 [2024-07-25 00:02:32.538110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.173 [2024-07-25 00:02:32.538661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.173 [2024-07-25 00:02:32.538707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.173 [2024-07-25 00:02:32.538728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.173 [2024-07-25 00:02:32.538977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.173 [2024-07-25 00:02:32.539224] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.173 [2024-07-25 00:02:32.539258] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.173 [2024-07-25 00:02:32.539278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.173 [2024-07-25 00:02:32.542895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.173 [2024-07-25 00:02:32.552021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.552634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.552681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.552715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.552966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.553214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.553238] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.553265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.556854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.174 [2024-07-25 00:02:32.565980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.566479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.566516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.566537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.566782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.567028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.567052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.567070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.570663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.174 [2024-07-25 00:02:32.579998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.580630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.580676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.580698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.580947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.581195] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.581219] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.581237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.584839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.174 [2024-07-25 00:02:32.593965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.594546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.594587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.594607] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.594854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.595101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.595137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.595155] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.598751] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.174 [2024-07-25 00:02:32.607867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.608272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.608303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.608322] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.608562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.608806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.608830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.608846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.612438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.174 [2024-07-25 00:02:32.621749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.622157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.622188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.622205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.622453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.622697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.622720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.622736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.626328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.174 [2024-07-25 00:02:32.635644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.636024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.636054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.636072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.636320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.636565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.636588] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.636604] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.640186] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.174 [2024-07-25 00:02:32.649524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.649943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.649974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.649991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.650230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.650483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.650506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.650522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.654107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.174 [2024-07-25 00:02:32.663421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.663853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.663884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.663901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.664140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.664392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.664416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.664432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.668015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.174 [2024-07-25 00:02:32.677327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.174 [2024-07-25 00:02:32.677722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.174 [2024-07-25 00:02:32.677753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.174 [2024-07-25 00:02:32.677770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.174 [2024-07-25 00:02:32.678008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.174 [2024-07-25 00:02:32.678261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.174 [2024-07-25 00:02:32.678285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.174 [2024-07-25 00:02:32.678301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.174 [2024-07-25 00:02:32.681884] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.174 [2024-07-25 00:02:32.691195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.691607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.691638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.691655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.691900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.692144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.692167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.692183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.695774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.175 [2024-07-25 00:02:32.705086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.705499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.705530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.705548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.705787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.706031] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.706054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.706069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.709659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.175 [2024-07-25 00:02:32.718968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.719370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.719401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.719419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.719658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.719901] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.719924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.719940] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.723533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.175 [2024-07-25 00:02:32.732840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.733237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.733274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.733292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.733531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.733774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.733797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.733818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.737408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.175 [2024-07-25 00:02:32.746726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.747170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.747201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.747219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.747465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.747709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.747733] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.747748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.751338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.175 [2024-07-25 00:02:32.760656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.761023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.761054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.761071] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.761320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.761563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.761586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.761602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.765190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:02.175 [2024-07-25 00:02:32.774416] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:02.175 [2024-07-25 00:02:32.774791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:02.175 [2024-07-25 00:02:32.774818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420 00:25:02.175 [2024-07-25 00:02:32.774834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set 00:25:02.175 [2024-07-25 00:02:32.775064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor 00:25:02.175 [2024-07-25 00:02:32.775303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.175 [2024-07-25 00:02:32.775325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:02.175 [2024-07-25 00:02:32.775338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.175 [2024-07-25 00:02:32.778675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:02.434 [... the identical resetting controller / connect() failed, errno = 111 / reconnect-failed sequence repeats for a further 34 retry attempts, from 00:02:32.788 through 00:02:33.239, differing only in timestamps ...]
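For context: errno = 111 is ECONNREFUSED, meaning the host-side reconnect loop is racing the target bring-up that follows below; every connect() to 10.0.0.2:4420 is refused until the listener is created. A minimal way to observe the same condition from a shell, assuming bash with /dev/tcp support (illustration only, not part of the test):

# Poll the port the same way the failing connect() does; the loop exits
# once something is actually listening on 10.0.0.2:4420.
until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
    echo 'connect() refused (errno 111) - no listener on 10.0.0.2:4420 yet'
    sleep 0.1
done
echo 'port 4420 is accepting connections'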
00:25:02.696 [2024-07-25 00:02:33.250165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:02.696 [2024-07-25 00:02:33.250621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:02.696 [2024-07-25 00:02:33.250668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:02.696 [2024-07-25 00:02:33.250696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:25:02.696 [2024-07-25 00:02:33.250987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:25:02.696 [2024-07-25 00:02:33.251317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:02.696 [2024-07-25 00:02:33.251348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:02.696 [2024-07-25 00:02:33.251373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:02.696 [2024-07-25 00:02:33.255356] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.696 [2024-07-25 00:02:33.264359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:02.696 [2024-07-25 00:02:33.264753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
[2024-07-25 00:02:33.264784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
00:25:02.697 [2024-07-25 00:02:33.264800] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
[2024-07-25 00:02:33.265031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
[2024-07-25 00:02:33.265272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.265295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.265309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:02.697 [2024-07-25 00:02:33.268594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.697 [2024-07-25 00:02:33.277882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:02:33.278256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.278285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.278302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.278518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
[2024-07-25 00:02:33.278746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.278767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.278786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 00:02:33.281970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.697 00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:33.287457] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-07-25 00:02:33.291633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:02:33.291971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.291999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.292015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.292231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
[2024-07-25 00:02:33.292488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.292510] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.292524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 00:02:33.295925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.697 [2024-07-25 00:02:33.305307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:02.962 [2024-07-25 00:02:33.305642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.305670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.305686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.305901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
[2024-07-25 00:02:33.306130] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.306151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.306165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 00:02:33.309383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.962 00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:33.318971] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:02:33.319433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.319464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.319481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.319723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
[2024-07-25 00:02:33.319938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.319958] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.319973] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
[2024-07-25 00:02:33.323220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:02.962 [2024-07-25 00:02:33.332669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
Malloc0
[2024-07-25 00:02:33.333213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.333254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.333274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.333497] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
[2024-07-25 00:02:33.333730] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.333751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.333767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:33.337031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:33.346382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:02:33.346783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-07-25 00:02:33.346811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cecac0 with addr=10.0.0.2, port=4420
[2024-07-25 00:02:33.346828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cecac0 is same with the state(5) to be set
[2024-07-25 00:02:33.347058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cecac0 (9): Bad file descriptor
[2024-07-25 00:02:33.347299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
[2024-07-25 00:02:33.347322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
[2024-07-25 00:02:33.347336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:02.962 00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
[2024-07-25 00:02:33.350736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[2024-07-25 00:02:33.352984] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3477813
[2024-07-25 00:02:33.360031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
[2024-07-25 00:02:33.391518] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:12.923
00:25:12.923 Latency(us)
00:25:12.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.923 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:25:12.923 Verification LBA range: start 0x0 length 0x4000
00:25:12.923 Nvme1n1 : 15.01 6293.98 24.59 10211.54 0.00 7730.78 843.47 17767.54
00:25:12.923 ===================================================================================================================
00:25:12.923 Total : 6293.98 24.59 10211.54 0.00 7730.78 843.47 17767.54
00:25:12.923 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 3478519 ']'
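Pieced together from the rpc_cmd traces above, the target-side bring-up that finally lets the reconnect succeed is five RPCs. A condensed sketch, assuming SPDK's scripts/rpc.py against the default RPC socket (arguments copied verbatim from the trace; the comments are an interpretation, not part of the log):

# Target bring-up as traced above (sketch)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MB malloc bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                    # expose the bdev as a namespace
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # start listening

Only after the last call does tcp.c log the Target Listening notice, at which point the pending controller reset completes successfully.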
nvmf/common.sh@489 -- # '[' -n 3478519 ']' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 3478519 ']' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3478519' 00:25:12.924 killing process with pid 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 3478519 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.924 00:02:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:14.295 00:25:14.295 real 0m22.743s 00:25:14.295 user 1m1.415s 00:25:14.295 sys 0m4.102s 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:14.295 ************************************ 00:25:14.295 END TEST nvmf_bdevperf 00:25:14.295 ************************************ 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.295 ************************************ 00:25:14.295 START TEST nvmf_target_disconnect 00:25:14.295 ************************************ 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:14.295 * Looking for test storage... 00:25:14.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:14.295 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:14.295 00:02:44 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:25:14.296 00:02:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:25:16.196 00:02:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:16.196 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:16.196 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:16.196 00:02:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:16.196 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:16.196 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:16.196 00:02:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:16.196 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:16.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:16.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:25:16.196 00:25:16.196 --- 10.0.0.2 ping statistics --- 00:25:16.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.197 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:16.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:16.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:25:16.197 00:25:16.197 --- 10.0.0.1 ping statistics --- 00:25:16.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:16.197 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.197 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:16.455 ************************************ 00:25:16.455 START TEST nvmf_target_disconnect_tc1 00:25:16.455 ************************************ 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:16.455 00:02:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:16.455 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.455 [2024-07-25 00:02:46.895093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:16.455 [2024-07-25 00:02:46.895160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11501a0 with addr=10.0.0.2, port=4420 00:25:16.455 [2024-07-25 00:02:46.895200] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:16.455 [2024-07-25 00:02:46.895221] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:16.455 [2024-07-25 00:02:46.895235] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:25:16.455 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:25:16.455 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:16.455 Initializing NVMe Controllers 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:16.455 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:16.456 00:25:16.456 real 0m0.094s 00:25:16.456 user 0m0.031s 00:25:16.456 sys 0m0.062s 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:16.456 ************************************ 00:25:16.456 END TEST nvmf_target_disconnect_tc1 00:25:16.456 ************************************ 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:25:16.456 00:02:46 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:16.456 ************************************ 00:25:16.456 START TEST nvmf_target_disconnect_tc2 00:25:16.456 ************************************ 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3481667 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3481667 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3481667 ']' 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:16.456 00:02:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.456 [2024-07-25 00:02:47.005333] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:25:16.456 [2024-07-25 00:02:47.005417] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.456 EAL: No free 2048 kB hugepages reported on node 1 00:25:16.714 [2024-07-25 00:02:47.081147] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.714 [2024-07-25 00:02:47.193138] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:16.714 [2024-07-25 00:02:47.193194] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.714 [2024-07-25 00:02:47.193213] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:16.714 [2024-07-25 00:02:47.193231] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:16.714 [2024-07-25 00:02:47.193267] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:16.714 [2024-07-25 00:02:47.193371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:16.714 [2024-07-25 00:02:47.196262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:16.714 [2024-07-25 00:02:47.196337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:16.714 [2024-07-25 00:02:47.196341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 Malloc0 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 [2024-07-25 00:02:47.383874] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 
00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 [2024-07-25 00:02:47.412140] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3481702 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:16.972 00:02:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:16.972 EAL: No free 2048 kB hugepages reported on node 1 00:25:18.910 00:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3481667 00:25:18.910 00:02:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting 
I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 [2024-07-25 00:02:49.438233] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 
00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Write completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 Read completed with error (sct=0, sc=8) 00:25:18.910 starting I/O failed 00:25:18.910 [2024-07-25 00:02:49.438625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:25:18.910 [2024-07-25 00:02:49.438860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.910 [2024-07-25 00:02:49.438891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.910 qpair failed and we were unable to recover it. 00:25:18.910 [2024-07-25 00:02:49.439056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.910 [2024-07-25 00:02:49.439088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.439211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.439238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.439393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.439420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 
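Two failure signatures repeat from here to the end of the run. errno = 111 is ECONNREFUSED: the target was killed with kill -9 above, so every reconnect attempt to 10.0.0.2:4420 is refused. The per-I/O lines with (sct=0, sc=8) report, in NVMe generic-status terms, commands aborted due to SQ deletion — in-flight I/O torn down along with the dead qpair. A quick way to summarize a burst like this from a saved console log (the log path below is hypothetical):

LOG=console.log
grep -c 'completed with error (sct=0, sc=8)' "$LOG"   # I/Os aborted with their qpair
grep -c 'connect() failed, errno = 111' "$LOG"        # refused reconnect attempts
grep -o 'on qpair id [0-9]*' "$LOG" | sort | uniq -c  # which qpairs hit CQ transport errors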
00:25:18.911 [2024-07-25 00:02:49.439544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.439570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.439727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.439753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.439872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.439898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.440907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.440933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.441053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 
00:25:18.911 [2024-07-25 00:02:49.441219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.441397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.441576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.441755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.441925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.441951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.442090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.442230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.442385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.442540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.442706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 
00:25:18.911 [2024-07-25 00:02:49.442904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.442930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.443880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.443906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.444018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.444045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 00:25:18.911 [2024-07-25 00:02:49.444192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.911 [2024-07-25 00:02:49.444219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:18.911 qpair failed and we were unable to recover it. 
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Write completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Write completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Write completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.911 Read completed with error (sct=0, sc=8)
00:25:18.911 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Write completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 Read completed with error (sct=0, sc=8)
00:25:18.912 starting I/O failed
00:25:18.912 [2024-07-25 00:02:49.444574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:18.912 [2024-07-25 00:02:49.444760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.912 [2024-07-25 00:02:49.444791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:18.912 qpair failed and we were unable to recover it.
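The 32 aborted I/Os above (all "sct=0, sc=8"), capped by the CQ transport error -6 on qpair id 3, mark the point where the failed qpair is torn down and its queued reads and writes are drained back with an abort status. As a reading aid, here is a minimal C sketch of how such an sct/sc pair is unpacked from an NVMe completion; it is illustrative only (the struct and helper names are invented here, not SPDK's), and the interpretation follows the NVMe base spec's generic status table, where sct=0 with sc=0x08 is "Command Aborted due to SQ Deletion" -- consistent with the queue teardown seen here.

/* Illustrative decoder for the sct/sc pairs printed above.
 * Field layout follows the NVMe completion status field (CQE DW3 bits 31:17).
 * Not SPDK code -- all names here are invented for this sketch. */
#include <stdio.h>
#include <stdint.h>

struct nvme_status {
    uint8_t sc;   /* Status Code      (DW3 bits 24:17) */
    uint8_t sct;  /* Status Code Type (DW3 bits 27:25) */
    uint8_t dnr;  /* Do Not Retry     (DW3 bit 31)     */
};

static struct nvme_status decode_status(uint32_t cqe_dw3)
{
    struct nvme_status s;
    s.sc  = (cqe_dw3 >> 17) & 0xff;
    s.sct = (cqe_dw3 >> 25) & 0x07;
    s.dnr = (cqe_dw3 >> 31) & 0x01;
    return s;
}

int main(void)
{
    /* sct=0 (generic command status), sc=0x08: per the NVMe base spec,
     * "Command Aborted due to SQ Deletion" -- outstanding I/O is drained
     * back with this status when the failed queue pair is deleted. */
    uint32_t dw3 = (0u << 25) | (0x08u << 17);
    struct nvme_status s = decode_status(dw3);
    printf("sct=%u, sc=%u%s\n", s.sct, s.sc, s.dnr ? " (DNR)" : "");
    return 0;
}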
00:25:18.912 [2024-07-25 00:02:49.444953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.444979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.445895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.445919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.446039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.446216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.446379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 
00:25:18.912 [2024-07-25 00:02:49.446544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.446711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.446878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.446902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.447974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.447999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 
00:25:18.912 [2024-07-25 00:02:49.448142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.448284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.448428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.448610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.448774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.448942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.448966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 
00:25:18.912 [2024-07-25 00:02:49.449687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.912 [2024-07-25 00:02:49.449878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.912 qpair failed and we were unable to recover it. 00:25:18.912 [2024-07-25 00:02:49.449994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.450969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.450995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.451133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 
00:25:18.913 [2024-07-25 00:02:49.451302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.451472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.451627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.451762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.451929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.451953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.452070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.452208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.452414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.452585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.452754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 
00:25:18.913 [2024-07-25 00:02:49.452918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.452942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.453952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.453978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.454155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.454181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.454311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.454337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.454451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.454476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 
00:25:18.913 [2024-07-25 00:02:49.454645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.454671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.454787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.454812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.454982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.455148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.455325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.455494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.455666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.455831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.455856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.456001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.456030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 00:25:18.913 [2024-07-25 00:02:49.456178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.913 [2024-07-25 00:02:49.456203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.913 qpair failed and we were unable to recover it. 
00:25:18.914 [2024-07-25 00:02:49.456356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.456382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.456497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.456523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.456691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.456716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.456836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.456861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.457769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 
00:25:18.914 [2024-07-25 00:02:49.457962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.457988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.458857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.458882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 
00:25:18.914 [2024-07-25 00:02:49.459460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.459908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.459933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.460912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.460937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 
00:25:18.914 [2024-07-25 00:02:49.461076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.461247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.461384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.461522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.461683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.914 qpair failed and we were unable to recover it. 00:25:18.914 [2024-07-25 00:02:49.461876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.914 [2024-07-25 00:02:49.461901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 
00:25:18.916 [2024-07-25 00:02:49.462691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.462968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.462993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.463139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.463164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.463277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.463303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.463474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.463499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.463669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.463695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.463831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.463856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 
00:25:18.916 [2024-07-25 00:02:49.464277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.464887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.464912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.465080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.465255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.465422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.465567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.465731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 
00:25:18.916 [2024-07-25 00:02:49.465900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.465927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.466897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.466926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.467040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.467065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.467192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.467217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 00:25:18.916 [2024-07-25 00:02:49.467334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.916 [2024-07-25 00:02:49.467359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.916 qpair failed and we were unable to recover it. 
00:25:18.916 [2024-07-25 00:02:49.467478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:18.916 [2024-07-25 00:02:49.467503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:18.916 qpair failed and we were unable to recover it.
[... the same three-line failure — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 00:02:49.467 and 00:02:49.501; duplicate repetitions elided ...]
00:25:18.922 [2024-07-25 00:02:49.501863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.501888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.502818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.502984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.503152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.503318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 
00:25:18.922 [2024-07-25 00:02:49.503491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.503654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.503816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.503841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.503979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.504005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.504170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.504195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.922 qpair failed and we were unable to recover it. 00:25:18.922 [2024-07-25 00:02:49.504367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.922 [2024-07-25 00:02:49.504393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.504544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.504569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.504743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.504768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.504910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.504935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.505108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.505133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 
00:25:18.923 [2024-07-25 00:02:49.505294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.505320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.505462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.505487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.505627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.505652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.505790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.505816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.505983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.506178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.506327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.506501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.506662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.506828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.506853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 
00:25:18.923 [2024-07-25 00:02:49.507025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.507951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.507975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.508119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.508144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.508309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.508336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.508483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.508509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 
00:25:18.923 [2024-07-25 00:02:49.508669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.508693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.508836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.508861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.509849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.509987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.510128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 
00:25:18.923 [2024-07-25 00:02:49.510298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.510493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.510635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.923 [2024-07-25 00:02:49.510773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.923 [2024-07-25 00:02:49.510798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.923 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.510917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.510941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 
00:25:18.924 [2024-07-25 00:02:49.511841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.511866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.511985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.512968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.512994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.513110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.513248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 
00:25:18.924 [2024-07-25 00:02:49.513388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.513539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.513691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.513867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.513892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.514028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.514053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.514174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.514198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.514349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.514375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:18.924 [2024-07-25 00:02:49.514511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:18.924 [2024-07-25 00:02:49.514536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:18.924 qpair failed and we were unable to recover it. 00:25:19.204 [2024-07-25 00:02:49.514658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.204 [2024-07-25 00:02:49.514683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.204 qpair failed and we were unable to recover it. 00:25:19.204 [2024-07-25 00:02:49.514827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.514852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 
00:25:19.205 [2024-07-25 00:02:49.514966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.514991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.515939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.515964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 
00:25:19.205 [2024-07-25 00:02:49.516550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.516963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.516986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.517915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.517941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 
00:25:19.205 [2024-07-25 00:02:49.518085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.518856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.518994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.519159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.519306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.519446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 
00:25:19.205 [2024-07-25 00:02:49.519609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.519758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.519894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.519919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.520068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.520093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.520210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.520235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.520426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.520451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.520598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.205 [2024-07-25 00:02:49.520624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.205 qpair failed and we were unable to recover it. 00:25:19.205 [2024-07-25 00:02:49.520789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.206 [2024-07-25 00:02:49.520815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.206 qpair failed and we were unable to recover it. 00:25:19.206 [2024-07-25 00:02:49.520923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.206 [2024-07-25 00:02:49.520949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.206 qpair failed and we were unable to recover it. 00:25:19.206 [2024-07-25 00:02:49.521089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.206 [2024-07-25 00:02:49.521116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.206 qpair failed and we were unable to recover it. 
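Triage context for the span above: errno = 111 on Linux is ECONNREFUSED, i.e. the connect() to 10.0.0.2:4420 (the standard NVMe/TCP port) is actively refused because nothing is accepting on that address, so nvme_tcp_qpair_connect_sock() fails on every attempt and the qpair cannot be established. A minimal sketch of that socket-level failure mode follows; it is illustrative only, not SPDK source, with the address and port taken from the log lines above.

/* Minimal sketch (not SPDK source) of the pattern behind the repeated
 * failures above: errno 111 (ECONNREFUSED) means no listener is
 * accepting on 10.0.0.2:4420, so the TCP qpair can never come up. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *ip, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -1;
    }

    struct sockaddr_in sa = { 0 };
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    inet_pton(AF_INET, ip, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* This is the branch the log keeps hitting. */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
        close(fd);
        return -1;
    }
    return fd;
}

int main(void)
{
    /* Address and port taken from the log lines above. */
    int fd = try_connect("10.0.0.2", 4420);
    if (fd >= 0) {
        close(fd);
    }
    return fd >= 0 ? 0 : 1;
}

Retrying in a loop, as the initiator is doing here, only succeeds once a listener actually binds the port on the target side.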
00:25:19.206 Read completed with error (sct=0, sc=8) 00:25:19.206 starting I/O failed
[... 31 further "Read/Write completed with error (sct=0, sc=8)" / "starting I/O failed" completions elided; every outstanding I/O on the qpair fails the same way ...]
00:25:19.206 [2024-07-25 00:02:49.521450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:25:19.206 [2024-07-25 00:02:49.521617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.206 [2024-07-25 00:02:49.521657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.206 qpair failed and we were unable to recover it.
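Interpreting the burst above: sct=0 is the NVMe Generic Command Status type, and within that type status code 0x08 is "Command Aborted due to SQ Deletion" in the NVMe base spec — consistent with all 32 outstanding reads and writes being failed when the queue pair's transport drops, after which spdk_nvme_qpair_process_completions() reports CQ transport error -6 (ENXIO, "No such device or address"). Below is a minimal sketch of decoding the (sct, sc) pair from a completion queue entry's Dword 3, assuming the standard CQE layout; it is illustrative, not SPDK source.

/* Minimal sketch (not SPDK source): decode the (sct, sc) pair from an
 * NVMe completion queue entry, assuming the standard CQE layout
 * (Dword 3: bits 15:0 command id, bit 16 phase tag, bits 31:17 status
 * field; within the status field, SC = bits 7:0, SCT = bits 10:8). */
#include <stdint.h>
#include <stdio.h>

static void decode_cqe_status(uint32_t cqe_dw3)
{
    uint16_t status = (uint16_t)(cqe_dw3 >> 17);      /* 15-bit status field */
    uint8_t  sc     = (uint8_t)(status & 0xff);       /* Status Code */
    uint8_t  sct    = (uint8_t)((status >> 8) & 0x7); /* Status Code Type */
    printf("sct=%u, sc=%u%s\n", sct, sc,
           (sct == 0 && sc == 0x08) ? " (generic: aborted, SQ deleted)" : "");
}

int main(void)
{
    /* DW3 value carrying SCT=0, SC=0x08 — the status seen in the log. */
    decode_cqe_status((uint32_t)0x08 << 17);
    return 0;
}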
00:25:19.206 [2024-07-25 00:02:49.521801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.206 [2024-07-25 00:02:49.521828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.206 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." sequence keeps repeating from 00:02:49.521 through 00:02:49.528, now alternating between tqpair=0x7f8f04000b90 and tqpair=0x7f8f14000b90; identical retries elided ...]
00:25:19.207 [2024-07-25 00:02:49.528446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.528472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it.
00:25:19.207 [2024-07-25 00:02:49.528615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.528641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.528753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.528778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.528928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.528954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.529946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.529973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.530089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.530115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 
00:25:19.207 [2024-07-25 00:02:49.530267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.530307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.530444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.530471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.530646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.530672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.530839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.530865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 00:25:19.207 [2024-07-25 00:02:49.531950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.207 [2024-07-25 00:02:49.531975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.207 qpair failed and we were unable to recover it. 
00:25:19.207 [2024-07-25 00:02:49.532088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.532256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.532400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.532560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.532702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.532872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.532897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.533040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.533204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.533355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.533493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 
00:25:19.208 [2024-07-25 00:02:49.533667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.533860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.533886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.534835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.534865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 
00:25:19.208 [2024-07-25 00:02:49.535306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.535952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.535979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.536147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.536296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.536432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.536600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.536761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 
00:25:19.208 [2024-07-25 00:02:49.536893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.536918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.537833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.537858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.208 qpair failed and we were unable to recover it. 00:25:19.208 [2024-07-25 00:02:49.538008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.208 [2024-07-25 00:02:49.538033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.538179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.538352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 
00:25:19.209 [2024-07-25 00:02:49.538495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.538634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.538803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.538945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.538970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.539135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.539349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.539519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.539654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.539855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.539976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 
00:25:19.209 [2024-07-25 00:02:49.540148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.540295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.540466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.540634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.540800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.540826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.540991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.541122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.541282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.541459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.541598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 
00:25:19.209 [2024-07-25 00:02:49.541771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.541924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.541950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.542929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.542955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.209 [2024-07-25 00:02:49.543123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.209 [2024-07-25 00:02:49.543148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.209 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.543316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.543342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 
00:25:19.210 [2024-07-25 00:02:49.543489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.543515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.543660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.543685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.543829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.543855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.543964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.543990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.544949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.544975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 
00:25:19.210 [2024-07-25 00:02:49.545114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.545287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.545464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.545602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.545768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.545952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.545978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.546120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.546293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.546492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.546634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 
00:25:19.210 [2024-07-25 00:02:49.546803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.546973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.546998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.547959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.547985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.548157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.548183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.548330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.548357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 
00:25:19.210 [2024-07-25 00:02:49.548502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.548529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.548696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.548722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.548845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.548871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.549021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.549047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.549188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.549214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.549352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.210 [2024-07-25 00:02:49.549379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.210 qpair failed and we were unable to recover it. 00:25:19.210 [2024-07-25 00:02:49.549519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.211 [2024-07-25 00:02:49.549546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.211 qpair failed and we were unable to recover it. 00:25:19.211 [2024-07-25 00:02:49.549664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.211 [2024-07-25 00:02:49.549692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.211 qpair failed and we were unable to recover it. 00:25:19.211 [2024-07-25 00:02:49.549842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.211 [2024-07-25 00:02:49.549868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.211 qpair failed and we were unable to recover it. 00:25:19.211 [2024-07-25 00:02:49.550005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.211 [2024-07-25 00:02:49.550031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.211 qpair failed and we were unable to recover it. 
00:25:19.211 [2024-07-25 00:02:49.550169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.211 [2024-07-25 00:02:49.550195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.211 qpair failed and we were unable to recover it.
[... the same three-line failure sequence repeats with advancing timestamps through 00:02:49.586, alternating between tqpair=0x7f8f14000b90 and tqpair=0x7f8f04000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:19.216 [2024-07-25 00:02:49.586493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.586521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.216 [2024-07-25 00:02:49.586681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.586708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.216 [2024-07-25 00:02:49.586847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.586892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.216 [2024-07-25 00:02:49.587016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.587045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.216 [2024-07-25 00:02:49.587177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.587208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.216 [2024-07-25 00:02:49.587359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.216 [2024-07-25 00:02:49.587386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.216 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.587500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.587526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.587744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.587770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.587929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.587957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.588136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.588165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 
00:25:19.217 [2024-07-25 00:02:49.588325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.588352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.588491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.588535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.588719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.588748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.588906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.588934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.589951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.589977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 
00:25:19.217 [2024-07-25 00:02:49.590088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.590114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.590254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.590283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.590425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.590451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.590589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.590630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.590825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.590851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.590998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.591164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.591383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.591548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.591731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 
00:25:19.217 [2024-07-25 00:02:49.591926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.591954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.592130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.592156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.592300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.592344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.592499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.592528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.592693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.592717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.592916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.592944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.593140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.593165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.593286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.593312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.593461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.593505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.593686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.593715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 
00:25:19.217 [2024-07-25 00:02:49.593882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.593907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.594068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.594096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.594270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.217 [2024-07-25 00:02:49.594295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.217 qpair failed and we were unable to recover it. 00:25:19.217 [2024-07-25 00:02:49.594438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.594465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.594584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.594614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.594770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.594795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.594906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.594933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.595046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.595072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.595259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.595286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.595432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.595457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 
00:25:19.218 [2024-07-25 00:02:49.595598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.595639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.595826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.595854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.596956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.596984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.597151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.597177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.597324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.597350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 
00:25:19.218 [2024-07-25 00:02:49.597490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.597515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.597634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.597659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.597840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.597883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.598913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.598938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.599075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 
00:25:19.218 [2024-07-25 00:02:49.599249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.599406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.599607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.599803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.599945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.599970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.218 [2024-07-25 00:02:49.600116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.218 [2024-07-25 00:02:49.600142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.218 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.600295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.600321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.600485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.600513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.600636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.600664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.600795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.600822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 
00:25:19.219 [2024-07-25 00:02:49.600964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.600990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.601190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.601219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.601394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.601420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.601533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.601577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.601740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.601773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.601961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.601987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.602155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.602183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.602338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.602367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.602516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.602541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.602651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.602677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 
00:25:19.219 [2024-07-25 00:02:49.602847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.602873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.602983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.603175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.603387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.603558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.603700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.603844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.603870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.604021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.604163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.604374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 
00:25:19.219 [2024-07-25 00:02:49.604516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.604710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.604851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.604877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.605956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.605986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.606178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.606204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 
00:25:19.219 [2024-07-25 00:02:49.606338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.606364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.606536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.606565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.606729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.219 [2024-07-25 00:02:49.606754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.219 qpair failed and we were unable to recover it. 00:25:19.219 [2024-07-25 00:02:49.606860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.606886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.607083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.607279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.607429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.607632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.607824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.607982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 
00:25:19.220 [2024-07-25 00:02:49.608141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.608360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.608547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.608733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.608921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.608951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.609088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.609131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.609285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.609314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.609486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.609511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.609624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.609650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 00:25:19.220 [2024-07-25 00:02:49.609794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.220 [2024-07-25 00:02:49.609819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.220 qpair failed and we were unable to recover it. 
00:25:19.220 [2024-07-25 00:02:49.609955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.220 [2024-07-25 00:02:49.609980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.220 qpair failed and we were unable to recover it.
00:25:19.225 [... the same three-line error block repeats for every reconnect attempt from 00:02:49.609955 through 00:02:49.646838; duplicate entries collapsed. Each attempt fails identically: connect() returns errno = 111 (ECONNREFUSED) for tqpair=0x7f8f14000b90 targeting 10.0.0.2, port 4420 ...]
00:25:19.226 [2024-07-25 00:02:49.647008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.647160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.647365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.647562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.647701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.647860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.647885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.648025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.648237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.648431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.648572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 
00:25:19.226 [2024-07-25 00:02:49.648723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.648915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.648947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.649095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.649121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.649261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.649295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.649442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.649471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.649663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.649688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.649806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.649848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 
00:25:19.226 [2024-07-25 00:02:49.650531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.650822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.650989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.651146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.651314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.651522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.651656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.651792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.651817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.226 [2024-07-25 00:02:49.652012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.652040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 
00:25:19.226 [2024-07-25 00:02:49.652176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.226 [2024-07-25 00:02:49.652201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.226 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.652383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.652409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.652568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.652596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.652737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.652762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.652908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.652933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.653082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.653270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.653425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.653600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.653791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 
00:25:19.227 [2024-07-25 00:02:49.653933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.653959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.654094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.654122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.654309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.654335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.654502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.654530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.654677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.654705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.654869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.654895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.655040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.655223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.655432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.655568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 
00:25:19.227 [2024-07-25 00:02:49.655757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.655951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.655981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.656149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.656318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.656486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.656677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.656826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.656971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.657147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.657345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 
00:25:19.227 [2024-07-25 00:02:49.657528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.657739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.657925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.657953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.658091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.658134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.658330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.658356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.658509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.658535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.658705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.658730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.658841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.227 [2024-07-25 00:02:49.658866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.227 qpair failed and we were unable to recover it. 00:25:19.227 [2024-07-25 00:02:49.659032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.659060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.659220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.659258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 
00:25:19.228 [2024-07-25 00:02:49.659423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.659451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.659634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.659662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.659858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.659883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.660065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.660239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.660466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.660645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.660838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.660999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.661164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 
00:25:19.228 [2024-07-25 00:02:49.661336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.661497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.661690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.661897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.661922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.662091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.662311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.662509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.662702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.662855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.662988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 
00:25:19.228 [2024-07-25 00:02:49.663179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.663382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.663534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.663708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.663849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.663876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.663989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.664158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.664379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.664536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.664728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 
00:25:19.228 [2024-07-25 00:02:49.664893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.228 [2024-07-25 00:02:49.664938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.228 qpair failed and we were unable to recover it. 00:25:19.228 [2024-07-25 00:02:49.665092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.665120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.665317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.665343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.665464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.665489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.665626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.665651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.665800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.665825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.665989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.666214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.666387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.666529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 
00:25:19.229 [2024-07-25 00:02:49.666671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.666811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.666837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.666974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.667170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.667343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.667503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.667703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.667892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.667918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.668061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.668223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 
00:25:19.229 [2024-07-25 00:02:49.668368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.668538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.668731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.668897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.668922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.669061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.669086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.669261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.669290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.669434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.669459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.669599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.669640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.669789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.669817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 
00:25:19.229 [2024-07-25 00:02:49.670144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.670937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.670962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.671066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.671090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.671208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.671233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.229 qpair failed and we were unable to recover it. 00:25:19.229 [2024-07-25 00:02:49.671389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.229 [2024-07-25 00:02:49.671414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.230 qpair failed and we were unable to recover it. 00:25:19.230 [2024-07-25 00:02:49.671602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.230 [2024-07-25 00:02:49.671630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.230 qpair failed and we were unable to recover it. 
[... the two *ERROR* lines and the "qpair failed" line above repeat continuously from 00:02:49.670 through 00:02:49.706: every connect() to 10.0.0.2, port=4420 fails with errno = 111, first on tqpair=0x7f8f14000b90 and then on tqpair=0x7f8f04000b90, and no qpair recovers ...]
00:25:19.235 [2024-07-25 00:02:49.705062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.705209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.705402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.705547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.705715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.705882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.705911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.706097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.706249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.706419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.706589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 
00:25:19.235 [2024-07-25 00:02:49.706738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.706885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.706911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.707878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.707904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 
00:25:19.235 [2024-07-25 00:02:49.708313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.708820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.708974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.709003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.709138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.709167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.709313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.709339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.709479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.235 [2024-07-25 00:02:49.709505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.235 qpair failed and we were unable to recover it. 00:25:19.235 [2024-07-25 00:02:49.709655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.709684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.709801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.709827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 
00:25:19.236 [2024-07-25 00:02:49.709995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.710153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.710358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.710547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.710717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.710881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.710907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 
00:25:19.236 [2024-07-25 00:02:49.711675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.711866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.711985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.712153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.712323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.712492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.712674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.712874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.712900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.713043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.713216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 
00:25:19.236 [2024-07-25 00:02:49.713393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.713525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.713694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.713882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.713912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.714926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.714951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 
00:25:19.236 [2024-07-25 00:02:49.715070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.715246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.715388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.715555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.715697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.236 [2024-07-25 00:02:49.715832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.236 [2024-07-25 00:02:49.715857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.236 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.715968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.715996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.716115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.716318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.716479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 
00:25:19.237 [2024-07-25 00:02:49.716655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.716792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.716967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.716992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.717942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.717968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.718089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 
00:25:19.237 [2024-07-25 00:02:49.718254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.718388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.718564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.718714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.718900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.718929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.719092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.719240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.719427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.719577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.719745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 
00:25:19.237 [2024-07-25 00:02:49.719923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.719949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.720098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.720315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.720506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.720696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.720836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.720982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.721009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.721124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.721149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.237 qpair failed and we were unable to recover it. 00:25:19.237 [2024-07-25 00:02:49.721282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.237 [2024-07-25 00:02:49.721308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.721465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.721502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 
00:25:19.238 [2024-07-25 00:02:49.721664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.721690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.721833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.721858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.722938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.722965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.723098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.723303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 
00:25:19.238 [2024-07-25 00:02:49.723475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.723626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.723790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.723971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.723999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.724947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.724973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 
00:25:19.238 [2024-07-25 00:02:49.725111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.725136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.725258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.725285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.725425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.725452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.725639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.725668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.725821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.725850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.725971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.726164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.726312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.726458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.726595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 
00:25:19.238 [2024-07-25 00:02:49.726746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.726887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.726913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.727049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.727074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.727234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.727272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.727443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.727470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.727628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.238 [2024-07-25 00:02:49.727654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.238 qpair failed and we were unable to recover it. 00:25:19.238 [2024-07-25 00:02:49.727841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.239 [2024-07-25 00:02:49.727870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.239 qpair failed and we were unable to recover it. 00:25:19.239 [2024-07-25 00:02:49.728027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.239 [2024-07-25 00:02:49.728053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.239 qpair failed and we were unable to recover it. 00:25:19.239 [2024-07-25 00:02:49.728220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.239 [2024-07-25 00:02:49.728261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.239 qpair failed and we were unable to recover it. 00:25:19.239 [2024-07-25 00:02:49.728400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.239 [2024-07-25 00:02:49.728426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.239 qpair failed and we were unable to recover it. 
00:25:19.239 [2024-07-25 00:02:49.728537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.239 [2024-07-25 00:02:49.728564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.239 qpair failed and we were unable to recover it.
[... the same three-line error group repeats for every reconnect attempt between 00:02:49.728537 and 00:02:49.763993, always for tqpair=0x7f8f14000b90, addr=10.0.0.2, port=4420; only the timestamps differ ...]
00:25:19.244 [2024-07-25 00:02:49.763993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.244 [2024-07-25 00:02:49.764019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.244 qpair failed and we were unable to recover it.
00:25:19.244 [2024-07-25 00:02:49.764155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.764186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.764361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.764388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.764538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.764564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.764697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.764728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.764875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.764902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.765039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.765065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.244 qpair failed and we were unable to recover it. 00:25:19.244 [2024-07-25 00:02:49.765205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.244 [2024-07-25 00:02:49.765230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.765410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.765437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.765582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.765614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.765729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.765755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 
00:25:19.245 [2024-07-25 00:02:49.765866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.765900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.766875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.766989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.767167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.767311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 
00:25:19.245 [2024-07-25 00:02:49.767477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.767655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.767831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.767858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.768882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.768911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.769048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 
00:25:19.245 [2024-07-25 00:02:49.769219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.769393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.769542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.769699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.769893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.769920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 
00:25:19.245 [2024-07-25 00:02:49.770850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.770877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.770989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.771016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.771121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.771146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.771291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.245 [2024-07-25 00:02:49.771326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.245 qpair failed and we were unable to recover it. 00:25:19.245 [2024-07-25 00:02:49.771453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.771479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.771603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.771630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.771799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.771824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.771965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.771992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.772103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.772280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 
00:25:19.246 [2024-07-25 00:02:49.772415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.772559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.772710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.772926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.772957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.773866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.773892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 
00:25:19.246 [2024-07-25 00:02:49.774003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.774846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.774984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.775187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.775360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.775533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 
00:25:19.246 [2024-07-25 00:02:49.775673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.775817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.775960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.775992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.776110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.776136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.776277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.776304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.776443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.776469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.776643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.246 [2024-07-25 00:02:49.776669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.246 qpair failed and we were unable to recover it. 00:25:19.246 [2024-07-25 00:02:49.776789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.776819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.776973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.777146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 
00:25:19.247 [2024-07-25 00:02:49.777285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.777420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.777591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.777736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.777879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.777904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.778070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.778217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.778403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.778535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.778710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 
00:25:19.247 [2024-07-25 00:02:49.778887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.778915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.779871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.779898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.780049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.780208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.780400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 
00:25:19.247 [2024-07-25 00:02:49.780560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.780694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.780872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.780898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.781878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.781903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.782028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 
00:25:19.247 [2024-07-25 00:02:49.782220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.782369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.782526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.782674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.782852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.782879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.783002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.247 [2024-07-25 00:02:49.783030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.247 qpair failed and we were unable to recover it. 00:25:19.247 [2024-07-25 00:02:49.783193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.783220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.783373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.783405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.783551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.783580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.783695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.783722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 
00:25:19.248 [2024-07-25 00:02:49.783867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.783893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.784924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.784951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.785095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.785121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 00:25:19.248 [2024-07-25 00:02:49.785293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.785320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it. 
00:25:19.248 [2024-07-25 00:02:49.785438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.248 [2024-07-25 00:02:49.785470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.248 qpair failed and we were unable to recover it.
[identical three-message sequence repeated 55x for tqpair=0x7f8f14000b90 (00:02:49.785613 through 00:02:49.794138) and 5x for tqpair=0x2300250 (00:02:49.794321 through 00:02:49.794934), all with addr=10.0.0.2, port=4420]
[sequence repeated 1x for tqpair=0x2300250 at 00:02:49.795066, then:]
00:25:19.526 [2024-07-25 00:02:49.795219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e230 is same with the state(5) to be set
[sequence repeated 1x for tqpair=0x7f8f0c000b90, 5x for tqpair=0x7f8f14000b90, 1x for tqpair=0x2300250, and 1x for tqpair=0x7f8f0c000b90 (00:02:49.795382 through 00:02:49.796560)]
[identical three-message sequence continues uninterrupted from 00:02:49.796755 through 00:02:49.819571, alternating among tqpair=0x7f8f0c000b90, tqpair=0x7f8f14000b90, and tqpair=0x2300250, all with addr=10.0.0.2, port=4420; every attempt ends "qpair failed and we were unable to recover it."]
00:25:19.528 [2024-07-25 00:02:49.819718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.819745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.819886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.819912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.820849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.820874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.821015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.821191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 
00:25:19.528 [2024-07-25 00:02:49.821370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.821517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.821718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.821874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.821901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.822877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.822902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 
00:25:19.528 [2024-07-25 00:02:49.823047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.823217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.823378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.823572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.823764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.528 qpair failed and we were unable to recover it. 00:25:19.528 [2024-07-25 00:02:49.823962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.528 [2024-07-25 00:02:49.823989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.824131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.824276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.824422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.824572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.824738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.824873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.824897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.825955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.825980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.826122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.826289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.826456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.826597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.826764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.826906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.826932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.827073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.827203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.827347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.827530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.827703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.827875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.827901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.828874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.828899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.829517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.829943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.829967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.830808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.830955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.830980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.831932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.831959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.832106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.832137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.832287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.832315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.832492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.832520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.832667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.832692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.832827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.832853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.833916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.833941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.834064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.834091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 
00:25:19.529 [2024-07-25 00:02:49.834199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.834224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.834375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.834401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.834540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.834564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.834699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.529 [2024-07-25 00:02:49.834723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.529 qpair failed and we were unable to recover it. 00:25:19.529 [2024-07-25 00:02:49.834839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.834865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.834966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.834991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.835135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.835276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.835439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.835579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 
00:25:19.530 [2024-07-25 00:02:49.835776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.835917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.835941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.836939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.836964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.837079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.837226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 
00:25:19.530 [2024-07-25 00:02:49.837410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.837585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.837728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.837902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.837927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.838801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 
00:25:19.530 [2024-07-25 00:02:49.838943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.838968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.839922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.839949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.840085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.840110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.840230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.840262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 00:25:19.530 [2024-07-25 00:02:49.840402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.840432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it. 
00:25:19.530 [2024-07-25 00:02:49.840538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.530 [2024-07-25 00:02:49.840563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.530 qpair failed and we were unable to recover it.
00:25:19.530-00:25:19.534 [the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats ~200 more times, timestamps 2024-07-25 00:02:49.840670 through 00:02:49.876397, all targeting addr=10.0.0.2, port=4420; tqpair=0x2300250 throughout, except 00:02:49.851923-00:02:49.852852 where tqpair=0x7f8f0c000b90]
00:25:19.534 [2024-07-25 00:02:49.876550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.876579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.876710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.876737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.876882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.876907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.877906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.877931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.878041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 
00:25:19.534 [2024-07-25 00:02:49.878179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.878355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.878580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.878771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.878957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.878986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.879137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.879165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.879352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.879379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.879511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.879539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.879658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.879686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.879813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.879838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 
00:25:19.534 [2024-07-25 00:02:49.879987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.880012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.880179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.534 [2024-07-25 00:02:49.880207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.534 qpair failed and we were unable to recover it. 00:25:19.534 [2024-07-25 00:02:49.880380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.880406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.880566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.880594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.880752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.880780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.880938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.880964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.881077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.881120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.881323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.881349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.881514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.881539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.881680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.881722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.881856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.881885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.882884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.882914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.883110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.883136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.883302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.883331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.883487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.883515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.883649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.883675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.883820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.883845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.884922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.884947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.885055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.885231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.885395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.885577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.885770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.885940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.885965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.886947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.886973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.887152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.887312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.887452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.887621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.887781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.887949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.887975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.888142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.888280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.888440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.888586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.888728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.888861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.888888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.889888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.889933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.890059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.890087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.890214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.890240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.890446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.890474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.890660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.890686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.890854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.890879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.891071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.891099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.891256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.891285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.891452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.891478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.891619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.891660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.891820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.891853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.892008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.892189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 
00:25:19.535 [2024-07-25 00:02:49.892362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.892562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.892707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.535 qpair failed and we were unable to recover it. 00:25:19.535 [2024-07-25 00:02:49.892893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.535 [2024-07-25 00:02:49.892921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.893932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.893960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 
00:25:19.536 [2024-07-25 00:02:49.894144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.894283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.894454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.894612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.894755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.894893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.894919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.895059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.895087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.895236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.895271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.895429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.895454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.895563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.895588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 
00:25:19.536 [2024-07-25 00:02:49.895785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.895814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.895973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.896137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.896342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.896511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.896727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.896918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.896943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.897077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.897102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.897269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.897295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.897416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.897442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 
00:25:19.536 [2024-07-25 00:02:49.897641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.897669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.897854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.897881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.897990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.898198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.898406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.898585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.898803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.898969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.898995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.899138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.899163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 00:25:19.536 [2024-07-25 00:02:49.899314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.536 [2024-07-25 00:02:49.899340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.536 qpair failed and we were unable to recover it. 
00:25:19.536 [2024-07-25 00:02:49.899477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.536 [2024-07-25 00:02:49.899502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:19.536 qpair failed and we were unable to recover it.
[... the three-line error group above repeats verbatim with only the timestamps advancing, for roughly 210 consecutive reconnect attempts between 00:02:49.899477 and 00:02:49.936247; every attempt against tqpair=0x2300250, addr=10.0.0.2, port=4420 fails the same way with errno = 111 ...]
00:25:19.539 [2024-07-25 00:02:49.936458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.936484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.936674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.936702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.936830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.936860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.937047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.937209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.937430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.937654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.937812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.937975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.938159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 
00:25:19.539 [2024-07-25 00:02:49.938309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.938476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.938676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.938846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.938871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.939008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.939033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.939147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.939172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.939286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.939312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.939432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.539 [2024-07-25 00:02:49.939457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.539 qpair failed and we were unable to recover it. 00:25:19.539 [2024-07-25 00:02:49.939577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.939603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.939743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.939769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.939906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.939932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.940859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.940884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.941459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.941912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.941937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.942780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.942914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.942940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.943910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.943936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.944074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.944216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.944393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.944529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.944725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.944889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.944914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.945927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.945956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.946100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.946272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.946469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.946613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.946777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.946934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.946960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.947678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.947871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.947984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.948968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.948993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 
00:25:19.540 [2024-07-25 00:02:49.949276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.949969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.949994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.950114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.540 [2024-07-25 00:02:49.950139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.540 qpair failed and we were unable to recover it. 00:25:19.540 [2024-07-25 00:02:49.950284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.950310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.950449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.950474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.950610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.950636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 
00:25:19.541 [2024-07-25 00:02:49.950793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.950818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.950930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.950956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.951944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.951969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.952070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.952095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 00:25:19.541 [2024-07-25 00:02:49.952205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.541 [2024-07-25 00:02:49.952230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.541 qpair failed and we were unable to recover it. 
00:25:19.541 [2024-07-25 00:02:49.952371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.541 [2024-07-25 00:02:49.952409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.541 qpair failed and we were unable to recover it.
00:25:19.542 [... the same triple continues, alternating between tqpair=0x7f8f14000b90 and tqpair=0x2300250, through 00:02:49.964 ...]
00:25:19.542 [2024-07-25 00:02:49.964027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.542 [2024-07-25 00:02:49.964055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.542 qpair failed and we were unable to recover it.
00:25:19.542 [2024-07-25 00:02:49.964172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.964369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.964511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.964667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.964809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.964960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.964987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.965099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.965247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.965402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.965570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 
00:25:19.542 [2024-07-25 00:02:49.965714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.965886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.965912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.966848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.966984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.967151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 
00:25:19.542 [2024-07-25 00:02:49.967332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.967484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.967653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.967827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.967853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.967977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.968141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.968301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.968462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.968599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.968768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 
00:25:19.542 [2024-07-25 00:02:49.968932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.968957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.969928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.969954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 
00:25:19.542 [2024-07-25 00:02:49.970514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.970937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.970963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.971109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.971136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.971267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.971294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.542 [2024-07-25 00:02:49.971403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.542 [2024-07-25 00:02:49.971429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.542 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.971536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.971562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.971690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.971716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.971857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.971883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.972007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.972931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.972956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.973099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.973253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.973426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.973616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.973785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.973924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.973950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.974906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.974931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.975101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.975275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.975411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.975573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.975769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.975905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.975931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.976044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.976214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.976393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.976557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.976726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.976884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.976910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.977826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.977851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.978016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.978204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.978376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.978558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.978728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.978924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.978972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.979868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.979893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.980191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.980971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.980996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.981149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.981288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.981467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.981606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 
00:25:19.543 [2024-07-25 00:02:49.981771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.981932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.981957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.982956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.543 [2024-07-25 00:02:49.982982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.543 qpair failed and we were unable to recover it. 00:25:19.543 [2024-07-25 00:02:49.983095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.983266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.983408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.983547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.983690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.983834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.983859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.983996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.984156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.984316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.984498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.984636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.984779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.984959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.984985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.985924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.985949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.986093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.986222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.986370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.986548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.986729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.986889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.986917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.987875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.987902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.988041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.988252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.988430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.988608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.988786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.988946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.988975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.989136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.989165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.989307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.989334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.989479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.989504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.989641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.989667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.989827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.989855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.989997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.990162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.990385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.990564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.990758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.990949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.990977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.991131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.991160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.991343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.991369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.991513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.991538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.991707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.991732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.991880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.991906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.992905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.992930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 
00:25:19.544 [2024-07-25 00:02:49.993537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.993848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.993984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.994010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.994126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.994151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.994298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.994325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.544 [2024-07-25 00:02:49.994439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.544 [2024-07-25 00:02:49.994464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.544 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.994592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.994630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.994784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.994811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.994931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.994957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:49.995072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.995248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.995453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.995615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.995751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.995894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.995920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.996042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.996088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.996261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.996304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.996473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.996499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.996679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.996708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:49.996868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.996902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.997949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.997978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.998188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.998217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.998384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.998422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.998596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.998626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:49.998829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.998855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.998977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.999149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.999326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.999496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.999696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:49.999901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:49.999926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.000048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.000194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.000387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:50.000566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.000754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.000918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.000944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.001967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.001992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.002115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.002142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:50.002292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.002336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.002509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.002535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.002651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.002677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.002821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.002846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.003856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.003885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 
00:25:19.545 [2024-07-25 00:02:50.004067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.004095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.004260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.004311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.004424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.004450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.004623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.004649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.004785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.004813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.004973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.005003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.545 qpair failed and we were unable to recover it. 00:25:19.545 [2024-07-25 00:02:50.005147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.545 [2024-07-25 00:02:50.005173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.005343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.005387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.005534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.005563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.005728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.005754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.005892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.005936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.006121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.006150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.006286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.006313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.006459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.006485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.006636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.006680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.006866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.006892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.007009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.007190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.007400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.007626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.007811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.007968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.007994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.008105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.008131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.008306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.008336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.008473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.008499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.008637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.008681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.008870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.008899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.009060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.009234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.009414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.009623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.009767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.009961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.009991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.010122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.010149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.010274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.010310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.010479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.010518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.010705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.010735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.010853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.010880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.011053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.011084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.011232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.011267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.011414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.011459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.011631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.011662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.011801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.011828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.011974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.012167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.012347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.012488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.012690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.012881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.012909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.013050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.013078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.013257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.013285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.013415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.013443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.013608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.013638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.014256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.014288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.014485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.014513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.014703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.014734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.014878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.014908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.018255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.018298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.018438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.018469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.018641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.018672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.018827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.018855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.019052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.019236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.019447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.019614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.019811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.019980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.020128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.020369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.020570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 
00:25:19.546 [2024-07-25 00:02:50.020724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.020899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.020926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.021097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.021124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.021269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.021297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.021487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.021516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.021635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.021662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.021811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.546 [2024-07-25 00:02:50.021855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.546 qpair failed and we were unable to recover it. 00:25:19.546 [2024-07-25 00:02:50.022016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.547 [2024-07-25 00:02:50.022046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.547 qpair failed and we were unable to recover it. 00:25:19.547 [2024-07-25 00:02:50.022211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.547 [2024-07-25 00:02:50.022239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.547 qpair failed and we were unable to recover it. 00:25:19.547 [2024-07-25 00:02:50.022379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.547 [2024-07-25 00:02:50.022406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.547 qpair failed and we were unable to recover it. 
00:25:19.547 [2024-07-25 00:02:50.022546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.022572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.022768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.022795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.022926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.022969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.023131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.023160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.023306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.023333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.023484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.023512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.023637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.023664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.023858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.023885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.024024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.024055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.024258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.024307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.025258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.025307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.025470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.025501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.025661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.025699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.025857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.025899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.026067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.026108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.026292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.026328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.026479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.026514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.026655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.026690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.026849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.026885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.027955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.027980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.028908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.028934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.029093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.029292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.029459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.029631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.029826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.029976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.030172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.030341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.030482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.030648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.030842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.030868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.031887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.031913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.032863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.032890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.033059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.033086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.033227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.033259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.033402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.033427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.033549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.033574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.547 [2024-07-25 00:02:50.033714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.547 [2024-07-25 00:02:50.033740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.547 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.033847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.033873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.033991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.034965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.034990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.035943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.035969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.036891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.036916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.037909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.037935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.038893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.038918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.039053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.039196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.039369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.039579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.039808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.039972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.040006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.040176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.040209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.040387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.040421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.040585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.040617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.040760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.040792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.041084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.041262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.041470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.041675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.041846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.041992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.042029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.042220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.042268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.042451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.042486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.042664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.042696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.042882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.042915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.043110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.043287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.043463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.043671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.043813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.043981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.044944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.044969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.045100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.045282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.045454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.045643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.548 [2024-07-25 00:02:50.045787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.548 qpair failed and we were unable to recover it.
00:25:19.548 [2024-07-25 00:02:50.045907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.045932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.046851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.046993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.047942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.047967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.048880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.048909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.049957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.049983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.050941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.050967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.051165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.051367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.051529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.051696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.051864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.051990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.052016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.052156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.052181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.052312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.052338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.052504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.549 [2024-07-25 00:02:50.052529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.549 qpair failed and we were unable to recover it.
00:25:19.549 [2024-07-25 00:02:50.052696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.052721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.052831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.052856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.053865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.053891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.054031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.054194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 
00:25:19.549 [2024-07-25 00:02:50.054355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.054560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.054752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.054895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.054920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.055854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.055880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 
00:25:19.549 [2024-07-25 00:02:50.056019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.056931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.056957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.057079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.057105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.057215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.549 [2024-07-25 00:02:50.057248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.549 qpair failed and we were unable to recover it. 00:25:19.549 [2024-07-25 00:02:50.057406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.057432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.057573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.057598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.057745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.057770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.057908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.057934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.058098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.058124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.058265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.058298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.058467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.058492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.058656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.058682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.058850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.058875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.059362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.059969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.059996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.060165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.060329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.060496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.060661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.060825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.060958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.060984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.061942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.061968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.062107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.062287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.062454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.062623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.062787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.062969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.062995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.063953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.063978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.064095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.064270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.064423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.064616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.064779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.064948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.064974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.065093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.065256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.065435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.065570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.065748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.065885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.065910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.066966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.066991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.067102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.067129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.067276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.067302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.550 [2024-07-25 00:02:50.067455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.067482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 
00:25:19.550 [2024-07-25 00:02:50.067628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.550 [2024-07-25 00:02:50.067653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.550 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.067796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.067821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.067957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.067982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.068121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.068146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.068312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.068338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.068481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.068506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.068668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.068694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.068859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.068884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 
00:25:19.551 [2024-07-25 00:02:50.069344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.069814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.069983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.070146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.070327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.070519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.070662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.070833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.070859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 
00:25:19.551 [2024-07-25 00:02:50.070979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.071924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.071950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.072062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.072232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.072416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 
00:25:19.551 [2024-07-25 00:02:50.072598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.072773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.072941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.072968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.073860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.073991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 
00:25:19.551 [2024-07-25 00:02:50.074133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.074280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.074449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.074616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.074777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.074944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.074970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.075111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.075136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.075250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.075276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.075435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.075461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 00:25:19.551 [2024-07-25 00:02:50.075598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.551 [2024-07-25 00:02:50.075623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.551 qpair failed and we were unable to recover it. 
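errno = 111 in the posix_sock_create() lines above is Linux's ECONNREFUSED: each connect() reaches 10.0.0.2 on the NVMe/TCP well-known port 4420, but nothing is accepting there (typically the connection attempt is answered with a TCP RST because no listener is up yet), so the initiator's socket setup fails and it reports the qpair as unrecoverable. Below is a minimal standalone sketch of the failing call, assuming only a plain Linux host with no listener on the chosen address/port; it is illustrative C, not the SPDK sources the log cites.

/* Minimal sketch (not SPDK code): a bare connect(2) to a port with no
 * listener yields errno 111, the ECONNREFUSED reported above.
 * The address/port mirror the log but are assumptions; any
 * address without a listener behaves the same. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);  /* NVMe/TCP well-known port */
    if (inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr) != 1) {
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on the target this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Run against any address with no listener, this prints the same "connect() failed, errno = 111" the test log shows.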
00:25:19.551 [2024-07-25 00:02:50.075766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.551 [2024-07-25 00:02:50.075792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.551 qpair failed and we were unable to recover it.
[... several more identical triples for tqpair=0x7f8f14000b90 ...]
00:25:19.551 [2024-07-25 00:02:50.077083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.551 [2024-07-25 00:02:50.077125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:19.551 qpair failed and we were unable to recover it.
[... subsequent attempts alternate between tqpair=0x2300250 and tqpair=0x7f8f14000b90, all failing with errno = 111 against addr=10.0.0.2, port=4420, through 00:02:50.083 ...]
00:25:19.552 [2024-07-25 00:02:50.083624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.552 [2024-07-25 00:02:50.083649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.552 qpair failed and we were unable to recover it.
00:25:19.552 [2024-07-25 00:02:50.083796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.083821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.083964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.083993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.084966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.084992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.085137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.085284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 
00:25:19.552 [2024-07-25 00:02:50.085479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.085622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.085757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.085949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.085975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.086864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.086889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 
00:25:19.552 [2024-07-25 00:02:50.086998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.087915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.087940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 
00:25:19.552 [2024-07-25 00:02:50.088525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.088877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.088989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.552 [2024-07-25 00:02:50.089014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.552 qpair failed and we were unable to recover it. 00:25:19.552 [2024-07-25 00:02:50.089157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.089296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.089445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.089617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.089791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.089964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.089989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 
00:25:19.553 [2024-07-25 00:02:50.090132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.090157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.090294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.090320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.090499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.090525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.090666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.090692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.090832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.090857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 
00:25:19.553 [2024-07-25 00:02:50.091832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.091857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.091981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.092836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.092980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.093153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.093326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 
00:25:19.553 [2024-07-25 00:02:50.093461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.093666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.093834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.093967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.093992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.094930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.094956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 
00:25:19.553 [2024-07-25 00:02:50.095095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.095266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.095413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.095582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.095780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.095954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.095980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 
00:25:19.553 [2024-07-25 00:02:50.096696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.096885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.096994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.097019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.097200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.097225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.097399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.097424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.097567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.097592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.553 qpair failed and we were unable to recover it. 00:25:19.553 [2024-07-25 00:02:50.097732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.553 [2024-07-25 00:02:50.097757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.097895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.097920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.098061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.098228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.554 [2024-07-25 00:02:50.098431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.098603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.098743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.098914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.098939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.099958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.099983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.554 [2024-07-25 00:02:50.100111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.100284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.100428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.100596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.100763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.100932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.100958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.101098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.101247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.101384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.101549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.554 [2024-07-25 00:02:50.101736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.101878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.101903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.102939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.102964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.103106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.103254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.554 [2024-07-25 00:02:50.103426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.103590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.103786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.103957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.103986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.104934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.104959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.554 [2024-07-25 00:02:50.105070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.105227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.105396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.105563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.105731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.105898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.105925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.106038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.106064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.106202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.106227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.106378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.106403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 00:25:19.554 [2024-07-25 00:02:50.106544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.554 [2024-07-25 00:02:50.106569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.554 qpair failed and we were unable to recover it. 
00:25:19.843 [2024-07-25 00:02:50.137249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.137274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.137441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.137466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.137604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.137629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.137745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.137770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.137879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.137905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.138039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.138186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.138382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.138559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.138722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 
00:25:19.843 [2024-07-25 00:02:50.138886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.138916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.139064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.139089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.843 qpair failed and we were unable to recover it. 00:25:19.843 [2024-07-25 00:02:50.139196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.843 [2024-07-25 00:02:50.139222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.139392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.139436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.139608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.139643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.139779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.139813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.139961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.139987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.140100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.140126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.140278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.140304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.140446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.140471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 
00:25:19.844 [2024-07-25 00:02:50.140642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.140667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.140790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.140815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.140994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.141189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.141369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.141533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.141705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.141873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.141899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 
00:25:19.844 [2024-07-25 00:02:50.142347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.142853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.142982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.143142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.143304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.143441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.143580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.844 qpair failed and we were unable to recover it. 00:25:19.844 [2024-07-25 00:02:50.143773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.844 [2024-07-25 00:02:50.143798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 
00:25:19.845 [2024-07-25 00:02:50.143939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.143964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.144851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.144876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 
00:25:19.845 [2024-07-25 00:02:50.145500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.145972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.145997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.146952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.146977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 
00:25:19.845 [2024-07-25 00:02:50.147100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.147248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.147444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.147580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.147713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.147883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.147908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.845 qpair failed and we were unable to recover it. 00:25:19.845 [2024-07-25 00:02:50.148054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.845 [2024-07-25 00:02:50.148079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.148216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.148256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.148370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.148395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.148535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.148560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 
00:25:19.846 [2024-07-25 00:02:50.148704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.148730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.148854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.148880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.149841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.149867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 
00:25:19.846 [2024-07-25 00:02:50.150312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.150945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.150971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.151112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.151137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.846 qpair failed and we were unable to recover it. 00:25:19.846 [2024-07-25 00:02:50.151283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.846 [2024-07-25 00:02:50.151309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.151477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.151502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.151643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.151668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.151787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.151813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 
00:25:19.847 [2024-07-25 00:02:50.151950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.151975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.152913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.152940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.153096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.153237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.153404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 
00:25:19.847 [2024-07-25 00:02:50.153563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.153729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.153887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.153912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.154867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.154990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.155015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 
00:25:19.847 [2024-07-25 00:02:50.155154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.155179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.155320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.155346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.155488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.847 [2024-07-25 00:02:50.155513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.847 qpair failed and we were unable to recover it. 00:25:19.847 [2024-07-25 00:02:50.155626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.155652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.155819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.155844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.155979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.156173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.156323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.156487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.156653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 
00:25:19.848 [2024-07-25 00:02:50.156833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.156859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.157930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.157956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.158131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.158156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.158263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.158289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 
00:25:19.848 [2024-07-25 00:02:50.158441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.158466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.158616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.158642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.158777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.158806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.158974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 00:25:19.848 [2024-07-25 00:02:50.159924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.848 [2024-07-25 00:02:50.159949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.848 qpair failed and we were unable to recover it. 
00:25:19.848 [2024-07-25 00:02:50.160114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.849 [2024-07-25 00:02:50.160140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:19.849 qpair failed and we were unable to recover it.
00:25:19.849 [... the same three-line error (posix_sock_create connect() failure, nvme_tcp_qpair_connect_sock sock connection error, unrecoverable qpair) repeats 208 more times between 00:02:50.160308 and 00:02:50.194066; every attempt targets tqpair=0x2300250 at 10.0.0.2, port 4420 and fails with errno = 111 ...]
00:25:19.857 [2024-07-25 00:02:50.194189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.857 [2024-07-25 00:02:50.194218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:19.857 qpair failed and we were unable to recover it.
00:25:19.857 [2024-07-25 00:02:50.194350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.194376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.194493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.194521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.194664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.194690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.194847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.194872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.194999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.195191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.195374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.195568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.195737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.195892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.195918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 
00:25:19.857 [2024-07-25 00:02:50.196082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.196251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.196418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.196626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.196773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.196919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.196945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.197085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.197224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.197376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.197541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 
00:25:19.857 [2024-07-25 00:02:50.197712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.197889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.197915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.198055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.198080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.198189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.198216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.198366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.198393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.198506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.857 [2024-07-25 00:02:50.198531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.857 qpair failed and we were unable to recover it. 00:25:19.857 [2024-07-25 00:02:50.198709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.198739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.198856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.198883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.198999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.199165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 
00:25:19.858 [2024-07-25 00:02:50.199343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.199481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.199615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.199806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.199945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.199970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.200107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.200280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.200424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.200593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.200740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 
00:25:19.858 [2024-07-25 00:02:50.200887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.200913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.858 [2024-07-25 00:02:50.201779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.858 [2024-07-25 00:02:50.201805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.858 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.201915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.201941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 
00:25:19.859 [2024-07-25 00:02:50.202352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.202840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.202982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.203139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.203273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.203431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.203601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.203733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 
00:25:19.859 [2024-07-25 00:02:50.203871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.203896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.204906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.204932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.205047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.205197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 
00:25:19.859 [2024-07-25 00:02:50.205394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.205592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.205745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.859 qpair failed and we were unable to recover it. 00:25:19.859 [2024-07-25 00:02:50.205893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.859 [2024-07-25 00:02:50.205919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.206869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.206895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 
00:25:19.860 [2024-07-25 00:02:50.207039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.207219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.207371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.207518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.207700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.207886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.207912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.208030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.208182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.208357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.208512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 
00:25:19.860 [2024-07-25 00:02:50.208677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.208872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.208899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.209854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.209889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.210038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.210064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 00:25:19.860 [2024-07-25 00:02:50.210202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.860 [2024-07-25 00:02:50.210228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.860 qpair failed and we were unable to recover it. 
00:25:19.860 [2024-07-25 00:02:50.210396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.210427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.210573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.210598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.210744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.210775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.210921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.210949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.211893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.211920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 
00:25:19.861 [2024-07-25 00:02:50.212027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.212965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.212990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.213104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.213293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.213426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 
00:25:19.861 [2024-07-25 00:02:50.213581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.213761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.213926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.213952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.214117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.214142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.214272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.214299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.214419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.214445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.214564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.861 [2024-07-25 00:02:50.214597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.861 qpair failed and we were unable to recover it. 00:25:19.861 [2024-07-25 00:02:50.214744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.862 [2024-07-25 00:02:50.214770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.862 qpair failed and we were unable to recover it. 00:25:19.862 [2024-07-25 00:02:50.214891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.862 [2024-07-25 00:02:50.214916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.862 qpair failed and we were unable to recover it. 00:25:19.862 [2024-07-25 00:02:50.215019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.862 [2024-07-25 00:02:50.215044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.862 qpair failed and we were unable to recover it. 
00:25:19.862 [2024-07-25 00:02:50.215153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.862 [2024-07-25 00:02:50.215179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:19.862 qpair failed and we were unable to recover it.
[... the same three-line error triplet — connect() failed with errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock sock connection error and "qpair failed and we were unable to recover it." — repeats continuously from 00:02:50.215 through 00:02:50.249, alternating between tqpair=0x2300250 and tqpair=0x7f8f14000b90, all against addr=10.0.0.2, port=4420 ...]
00:25:19.868 [2024-07-25 00:02:50.249404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.868 [2024-07-25 00:02:50.249430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.868 qpair failed and we were unable to recover it.
00:25:19.868 [2024-07-25 00:02:50.249552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.868 [2024-07-25 00:02:50.249578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.868 qpair failed and we were unable to recover it. 00:25:19.868 [2024-07-25 00:02:50.249686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.868 [2024-07-25 00:02:50.249712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.868 qpair failed and we were unable to recover it. 00:25:19.868 [2024-07-25 00:02:50.249868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.868 [2024-07-25 00:02:50.249894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.868 qpair failed and we were unable to recover it. 00:25:19.868 [2024-07-25 00:02:50.250005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.868 [2024-07-25 00:02:50.250032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.868 qpair failed and we were unable to recover it. 00:25:19.868 [2024-07-25 00:02:50.250153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.250179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.250313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.250339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.250464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.250491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.250660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.250687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.250842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.250868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.251013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 
00:25:19.869 [2024-07-25 00:02:50.251195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.251371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.251574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.251757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.251901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.251927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.252068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.252225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.252401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.252574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.252708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 
00:25:19.869 [2024-07-25 00:02:50.252866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.252892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.253871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.253898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 
00:25:19.869 [2024-07-25 00:02:50.254489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.254971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.254997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.255115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.255141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.255260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.255297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.255425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.255451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.255590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.869 [2024-07-25 00:02:50.255615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.869 qpair failed and we were unable to recover it. 00:25:19.869 [2024-07-25 00:02:50.255769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.255795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.255911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.255936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 
00:25:19.870 [2024-07-25 00:02:50.256080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.256254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.256391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.256531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.256693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.256843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.256878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.257033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.257233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.257416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.257559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 
00:25:19.870 [2024-07-25 00:02:50.257711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.257920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.257948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.258836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.258864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.259006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.259206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 
00:25:19.870 [2024-07-25 00:02:50.259391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.259560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.259706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.259873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.259900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.260908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.260934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 
00:25:19.870 [2024-07-25 00:02:50.261090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.261120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.261247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.261281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.261436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.261463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.261639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.261665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.261829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.870 [2024-07-25 00:02:50.261855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.870 qpair failed and we were unable to recover it. 00:25:19.870 [2024-07-25 00:02:50.262023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.262208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.262409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.262600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.262742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 
00:25:19.871 [2024-07-25 00:02:50.262896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.262924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.263888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.263913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.264056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.264202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.264364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 
00:25:19.871 [2024-07-25 00:02:50.264559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.264729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.264896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.264921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.265885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.265910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 
00:25:19.871 [2024-07-25 00:02:50.266164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.266847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.266990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.267171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.267352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.267523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.267688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 
00:25:19.871 [2024-07-25 00:02:50.267886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.267912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.268050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.268075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.871 qpair failed and we were unable to recover it. 00:25:19.871 [2024-07-25 00:02:50.268195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.871 [2024-07-25 00:02:50.268220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.268388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.268416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.268531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.268563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.268700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.268728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.268870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.268896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.269066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.269253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.269412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 
00:25:19.872 [2024-07-25 00:02:50.269599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.269752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.269896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.269922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.270841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.270872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 00:25:19.872 [2024-07-25 00:02:50.271030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.872 [2024-07-25 00:02:50.271057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.872 qpair failed and we were unable to recover it. 
00:25:19.872 [2024-07-25 00:02:50.271196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.872 [2024-07-25 00:02:50.271223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.872 qpair failed and we were unable to recover it.
00:25:19.872 [... the same three-line triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error / "qpair failed and we were unable to recover it.") repeats back-to-back roughly 210 times between wall clock 00:02:50.271 and 00:02:50.309 (elapsed 00:25:19.872 through 00:25:19.878), cycling over tqpairs 0x7f8f14000b90, 0x7f8f04000b90, and 0x7f8f0c000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:25:19.878 [2024-07-25 00:02:50.309432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.309459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.309580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.309607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.309729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.309756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.309870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.309895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.310840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.310872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 
00:25:19.878 [2024-07-25 00:02:50.311020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.311187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.311363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.311534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.311694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.311864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.311892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.312057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.312221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.312391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.312564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 
00:25:19.878 [2024-07-25 00:02:50.312739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.312913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.312946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.313113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.313279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.313455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.313658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.313823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.313975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.314001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.314146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.314172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 00:25:19.878 [2024-07-25 00:02:50.314293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.314320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.878 qpair failed and we were unable to recover it. 
00:25:19.878 [2024-07-25 00:02:50.314461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.878 [2024-07-25 00:02:50.314489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.314640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.314669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.314822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.314854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.314999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.315968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.315994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 
00:25:19.879 [2024-07-25 00:02:50.316134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.316309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.316486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.316634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.316786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.316957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.316983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.317108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.317268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.317447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.317598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 
00:25:19.879 [2024-07-25 00:02:50.317760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.317896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.317922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.318870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.318896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.319046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.319073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.319184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.319209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 
00:25:19.879 [2024-07-25 00:02:50.319364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.319398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.319519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.319544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.879 qpair failed and we were unable to recover it. 00:25:19.879 [2024-07-25 00:02:50.319655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.879 [2024-07-25 00:02:50.319681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.319826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.319857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.319987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.320154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.320356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.320515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.320695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.320849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.320874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 
00:25:19.880 [2024-07-25 00:02:50.321031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.321181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.321366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.321538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.321730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.321890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.321915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.322055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.322253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.322405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.322545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 
00:25:19.880 [2024-07-25 00:02:50.322742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.322887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.322912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.323942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.323968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.324121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.324266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 
00:25:19.880 [2024-07-25 00:02:50.324432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.324562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.324725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.324863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.324888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 00:25:19.880 [2024-07-25 00:02:50.325896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.880 [2024-07-25 00:02:50.325922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.880 qpair failed and we were unable to recover it. 
00:25:19.880 [2024-07-25 00:02:50.326086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.326256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.326413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.326583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.326720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.326893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.326919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.327057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.327221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.327387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.327554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 
00:25:19.881 [2024-07-25 00:02:50.327747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.327919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.327945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.328896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.328921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.329036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.329207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 
00:25:19.881 [2024-07-25 00:02:50.329376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.329546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.329713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.329888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.329913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 00:25:19.881 [2024-07-25 00:02:50.330878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.881 [2024-07-25 00:02:50.330904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.881 qpair failed and we were unable to recover it. 
00:25:19.881 [2024-07-25 00:02:50.331041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.881 [2024-07-25 00:02:50.331066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.881 qpair failed and we were unable to recover it.
... [same connect()/qpair error triplet repeated 210 times, timestamps 00:02:50.331041 through 00:02:50.365525, all with tqpair=0x7f8f14000b90, addr=10.0.0.2, port=4420, errno = 111] ...
00:25:19.887 [2024-07-25 00:02:50.365498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.887 [2024-07-25 00:02:50.365525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.887 qpair failed and we were unable to recover it.
00:25:19.887 [2024-07-25 00:02:50.365679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.365704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.365841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.365868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.365986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.366160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.366300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.366502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.366665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.366867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.366893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.367071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.367267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 
00:25:19.887 [2024-07-25 00:02:50.367414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.367593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.367738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.367911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.367943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 00:25:19.887 [2024-07-25 00:02:50.368908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.887 [2024-07-25 00:02:50.368935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.887 qpair failed and we were unable to recover it. 
00:25:19.887 [2024-07-25 00:02:50.369092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.369259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.369412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.369582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.369741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.369916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.369944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 
00:25:19.888 [2024-07-25 00:02:50.370639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.370965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.370992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.371173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.371325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.371482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.371662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.371819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.371979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.372151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 
00:25:19.888 [2024-07-25 00:02:50.372328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.372476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.372642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.372795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.372821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.372988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.373122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.373316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.373458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.373597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.373761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 
00:25:19.888 [2024-07-25 00:02:50.373937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.373963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.374080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.374110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.374228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.374261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.374436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.374464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.374645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.374670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.374922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.374949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.888 qpair failed and we were unable to recover it. 00:25:19.888 [2024-07-25 00:02:50.375094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.888 [2024-07-25 00:02:50.375120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.375232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.375265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.375444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.375470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.375582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.375608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 
00:25:19.889 [2024-07-25 00:02:50.375726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.375753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.375892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.375918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.376881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.376915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.377047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.377076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.380365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.380393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 
00:25:19.889 [2024-07-25 00:02:50.380530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.380557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.380710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.380735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.380862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.380889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.381899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.381942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.382113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.382148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 
00:25:19.889 [2024-07-25 00:02:50.382337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.382373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.382507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.382539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.382672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.382706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.382870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.382903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.383873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.383900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 
00:25:19.889 [2024-07-25 00:02:50.384013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.384850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.384996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.889 [2024-07-25 00:02:50.385022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.889 qpair failed and we were unable to recover it. 00:25:19.889 [2024-07-25 00:02:50.385166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.385192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.385323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.385350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.385492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.385521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 
00:25:19.890 [2024-07-25 00:02:50.385661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.385686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.385842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.385868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.385974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.386937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.386968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.387088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 
00:25:19.890 [2024-07-25 00:02:50.387258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.387423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.387593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.387751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.387895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.387921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.388063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.388230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.388428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.388604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.388770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 
00:25:19.890 [2024-07-25 00:02:50.388943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.388968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.389972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.389997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.390140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.390166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.390288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.390314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 00:25:19.890 [2024-07-25 00:02:50.390460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:19.890 [2024-07-25 00:02:50.390485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:19.890 qpair failed and we were unable to recover it. 
00:25:19.890 [2024-07-25 00:02:50.390589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:19.890 [2024-07-25 00:02:50.390614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:19.890 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it.") repeats for tqpair=0x7f8f14000b90 from 00:02:50.390755 through 00:02:50.407446 ...]
[... the repetition then continues uninterrupted through 00:02:50.425398, alternating among tqpair=0x7f8f14000b90, tqpair=0x7f8f0c000b90, and tqpair=0x2300250, with every attempt targeting addr=10.0.0.2, port=4420 ...]
00:25:20.179 [2024-07-25 00:02:50.425515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.179 [2024-07-25 00:02:50.425541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.179 qpair failed and we were unable to recover it.
00:25:20.179 [2024-07-25 00:02:50.425707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.425734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.425849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.425876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.425992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.426906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.426932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.427079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 
00:25:20.179 [2024-07-25 00:02:50.427263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.427454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.427633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.427771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.427935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.427960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.428081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.428253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.428391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.428533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.428690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 
00:25:20.179 [2024-07-25 00:02:50.428908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.428936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.179 qpair failed and we were unable to recover it. 00:25:20.179 [2024-07-25 00:02:50.429052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.179 [2024-07-25 00:02:50.429080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.429202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.429229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.429381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.429408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.429551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.429577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.429741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.429768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.429895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.429921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.430062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.430253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.430429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 
00:25:20.180 [2024-07-25 00:02:50.430605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.430752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.430943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.430968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.431111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.431279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.431454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.431628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.431843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.431982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.432138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 
00:25:20.180 [2024-07-25 00:02:50.432309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.432479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.432642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.432783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.432946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.432971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.433085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.433261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.433435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.433583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.433732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 
00:25:20.180 [2024-07-25 00:02:50.433873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.433899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.434842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.434868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.435008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.435034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.435198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.180 [2024-07-25 00:02:50.435227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.180 qpair failed and we were unable to recover it. 00:25:20.180 [2024-07-25 00:02:50.435380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.435410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 
00:25:20.181 [2024-07-25 00:02:50.435530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.435555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.435681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.435706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.435876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.435902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.436936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.436962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.437106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 
00:25:20.181 [2024-07-25 00:02:50.437275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.437449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.437587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.437739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.437904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.437929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.438054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.438252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.438425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.438565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.438730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 
00:25:20.181 [2024-07-25 00:02:50.438901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.438925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.439909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.439938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 
00:25:20.181 [2024-07-25 00:02:50.440563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.440886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.440996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.441022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.441142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.441167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.441301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.441327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.441471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.181 [2024-07-25 00:02:50.441496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.181 qpair failed and we were unable to recover it. 00:25:20.181 [2024-07-25 00:02:50.441611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.441636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.441769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.441795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.441907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.441933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 
00:25:20.182 [2024-07-25 00:02:50.442073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.442218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.442365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.442536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.442705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.442867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.442893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 
00:25:20.182 [2024-07-25 00:02:50.443691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.443858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.443994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.444913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.444939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.445057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 
00:25:20.182 [2024-07-25 00:02:50.445197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.445372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.445541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.445680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.445879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.445904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.446034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.446175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.446349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.446518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 00:25:20.182 [2024-07-25 00:02:50.446682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 
00:25:20.182 [2024-07-25 00:02:50.446861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.182 [2024-07-25 00:02:50.446887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.182 qpair failed and we were unable to recover it. 
00:25:20.188 [the same three-message failure — "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." — repeats 210 times from 2024-07-25 00:02:50.446861 through 00:02:50.480573, against tqpair=0x2300250, tqpair=0x7f8f14000b90, and tqpair=0x7f8f0c000b90, all with addr=10.0.0.2, port=4420]
00:25:20.188 [2024-07-25 00:02:50.480700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.480725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.480866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.480891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.188 [2024-07-25 00:02:50.481873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.188 [2024-07-25 00:02:50.481899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.188 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 
00:25:20.189 [2024-07-25 00:02:50.482360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.482859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.482978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.483117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.483281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.483428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.483590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.483762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 
00:25:20.189 [2024-07-25 00:02:50.483909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.483935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.484875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.484901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.485044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.485185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.485370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 
00:25:20.189 [2024-07-25 00:02:50.485568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.485706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.485857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.485882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.486826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.486852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.487031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 
00:25:20.189 [2024-07-25 00:02:50.487169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.487309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.487471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.487649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.487868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.189 [2024-07-25 00:02:50.487894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.189 qpair failed and we were unable to recover it. 00:25:20.189 [2024-07-25 00:02:50.488041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.488206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.488382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.488571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.488732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 
00:25:20.190 [2024-07-25 00:02:50.488894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.488919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.489863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.489890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.490066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.490235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.490402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 
00:25:20.190 [2024-07-25 00:02:50.490573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.490766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.490935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.490961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.491885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.491910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.492052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 
00:25:20.190 [2024-07-25 00:02:50.492185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.492342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.492535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.492694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.492875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.492901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 
00:25:20.190 [2024-07-25 00:02:50.493806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.493971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.493996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.494137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.494162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.190 qpair failed and we were unable to recover it. 00:25:20.190 [2024-07-25 00:02:50.494313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.190 [2024-07-25 00:02:50.494341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.494486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.494513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.494657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.494682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.494825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.494850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 
00:25:20.191 [2024-07-25 00:02:50.495524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.495843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.495993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.496941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.496966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 
00:25:20.191 [2024-07-25 00:02:50.497133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.497325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.497467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.497638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.497787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.497956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.497982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.498122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.498277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.498451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.498623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 
00:25:20.191 [2024-07-25 00:02:50.498793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.498936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.498962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.499895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.499920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.500030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.500055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.500225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.500258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 
00:25:20.191 [2024-07-25 00:02:50.500374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.191 [2024-07-25 00:02:50.500399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.191 qpair failed and we were unable to recover it. 00:25:20.191 [2024-07-25 00:02:50.500537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.500562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.500698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.500723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.500895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.500920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.501852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.501878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 
00:25:20.192 [2024-07-25 00:02:50.501993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.502900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.502927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.503093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.503119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.503288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.503314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 00:25:20.192 [2024-07-25 00:02:50.503452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.192 [2024-07-25 00:02:50.503478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.192 qpair failed and we were unable to recover it. 
00:25:20.198 [2024-07-25 00:02:50.534199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.534224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.534350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.534375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.534518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.534545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.534688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.534714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.534852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.534877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 
00:25:20.198 [2024-07-25 00:02:50.535809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.535946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.535976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.536892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.536918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.537067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.537213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 
00:25:20.198 [2024-07-25 00:02:50.537399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.537543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.537733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.537868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.537895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 00:25:20.198 [2024-07-25 00:02:50.538876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.198 [2024-07-25 00:02:50.538903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.198 qpair failed and we were unable to recover it. 
00:25:20.199 [2024-07-25 00:02:50.539044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.539869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.539982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.540150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.540334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.540481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 
00:25:20.199 [2024-07-25 00:02:50.540656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.540791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.540955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.540981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.541900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.541925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.542093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 
00:25:20.199 [2024-07-25 00:02:50.542285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.542451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.542594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.542787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.542926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.542951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.543119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.543144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.543273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.543299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.543452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.543478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.543645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.199 [2024-07-25 00:02:50.543670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.199 qpair failed and we were unable to recover it. 00:25:20.199 [2024-07-25 00:02:50.543814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.543839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 
00:25:20.200 [2024-07-25 00:02:50.543953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.543979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.544896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.544922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.545033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.545181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.545334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 
00:25:20.200 [2024-07-25 00:02:50.545474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.545649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.545820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.545845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.546826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.546997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 
00:25:20.200 [2024-07-25 00:02:50.547162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.547332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.547472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.547676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.547816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.547958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.547983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.548151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.548280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.548422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.548594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 
00:25:20.200 [2024-07-25 00:02:50.548765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.548927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.548952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.549090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.549281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.549454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.549648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.200 [2024-07-25 00:02:50.549808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.200 qpair failed and we were unable to recover it. 00:25:20.200 [2024-07-25 00:02:50.549978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.550131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.550270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 
00:25:20.201 [2024-07-25 00:02:50.550441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.550602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.550769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.550911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.550936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.551845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.551871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 
00:25:20.201 [2024-07-25 00:02:50.552012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.552172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.552342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.552481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.552644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.552814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.552841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 
00:25:20.201 [2024-07-25 00:02:50.553654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.553844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.553984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.554967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.554993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 00:25:20.201 [2024-07-25 00:02:50.555128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.201 [2024-07-25 00:02:50.555154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.201 qpair failed and we were unable to recover it. 
00:25:20.202 [2024-07-25 00:02:50.555298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.555324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.555463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.555488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.555606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.555632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.555769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.555798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.555968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.555993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.556133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.556158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.556303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.556329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.556446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.556473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.556614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.556640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 00:25:20.202 [2024-07-25 00:02:50.556805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.202 [2024-07-25 00:02:50.556831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.202 qpair failed and we were unable to recover it. 
00:25:20.202 [2024-07-25 00:02:50.556941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.202 [2024-07-25 00:02:50.556968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.202 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111; sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats back-to-back roughly every 150 us, from 00:02:50.557135 through 00:02:50.577739 ...]
00:25:20.205 [2024-07-25 00:02:50.577916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.206 [2024-07-25 00:02:50.577956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.206 qpair failed and we were unable to recover it.
[... the identical failure triplet continues, alternating in short bursts between tqpair=0x7f8f14000b90 and tqpair=0x2300250, every attempt ending with "qpair failed and we were unable to recover it." ...]
00:25:20.208 [2024-07-25 00:02:50.590968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.208 [2024-07-25 00:02:50.590993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.208 qpair failed and we were unable to recover it.
00:25:20.208 [2024-07-25 00:02:50.591143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.591279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.591420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.591609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.591807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.591965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.591990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.592135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.592304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.592465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.592625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 
00:25:20.208 [2024-07-25 00:02:50.592754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.592920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.592945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.593910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.593935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 
00:25:20.208 [2024-07-25 00:02:50.594351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.594936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.208 [2024-07-25 00:02:50.594961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.208 qpair failed and we were unable to recover it. 00:25:20.208 [2024-07-25 00:02:50.595099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.595246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.595389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.595526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.595665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 
00:25:20.209 [2024-07-25 00:02:50.595802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.595971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.595997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.596158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.596294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.596460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.596653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.596838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.596980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.597138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.597312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 
00:25:20.209 [2024-07-25 00:02:50.597489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.597643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.597783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.597971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.597996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.598906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.598931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 
00:25:20.209 [2024-07-25 00:02:50.599039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.599215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.599384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.599557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.599705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.599873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.599900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 
00:25:20.209 [2024-07-25 00:02:50.600691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.600860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.209 [2024-07-25 00:02:50.600982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.209 [2024-07-25 00:02:50.601007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.209 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.601869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.601894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.602061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 
00:25:20.210 [2024-07-25 00:02:50.602203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.602349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.602490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.602688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.602878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.602903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.603046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.603215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.603382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.603522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.603694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 
00:25:20.210 [2024-07-25 00:02:50.603859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.603884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.604841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.604866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 
00:25:20.210 [2024-07-25 00:02:50.605516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.605953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.605978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.606915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.606940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 
00:25:20.210 [2024-07-25 00:02:50.607050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.210 [2024-07-25 00:02:50.607076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.210 qpair failed and we were unable to recover it. 00:25:20.210 [2024-07-25 00:02:50.607213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.607239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.607380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.607405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.607540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.607565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.607704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.607730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.607864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.607890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.608059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.608192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.608347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.608519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 
00:25:20.211 [2024-07-25 00:02:50.608688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.608854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.608880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.609904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.609931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.610084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.610231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 
00:25:20.211 [2024-07-25 00:02:50.610411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.610555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.610730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.610897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.610923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.611936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.611962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 
00:25:20.211 [2024-07-25 00:02:50.612076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.612271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.612425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.612596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.612764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.612936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.612963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.211 qpair failed and we were unable to recover it. 00:25:20.211 [2024-07-25 00:02:50.613135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.211 [2024-07-25 00:02:50.613161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.613272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.613298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.613438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.613464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.613611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.613637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 
00:25:20.212 [2024-07-25 00:02:50.613787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.613813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.613935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.613961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.614877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.614902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.615043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.615068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 00:25:20.212 [2024-07-25 00:02:50.615218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.212 [2024-07-25 00:02:50.615258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.212 qpair failed and we were unable to recover it. 
[... the same connect() failed / qpair failed sequence repeats for tqpair=0x2300250 roughly 200 more times, timestamps 2024-07-25 00:02:50.614254 through 00:02:50.645659, every attempt targeting addr=10.0.0.2, port=4420, and none recovers ...]
00:25:20.217 [2024-07-25 00:02:50.645792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.217 [2024-07-25 00:02:50.645817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.217 qpair failed and we were unable to recover it. 00:25:20.217 [2024-07-25 00:02:50.645927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.217 [2024-07-25 00:02:50.645952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.217 qpair failed and we were unable to recover it. 00:25:20.217 [2024-07-25 00:02:50.646093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.217 [2024-07-25 00:02:50.646118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.217 qpair failed and we were unable to recover it. 00:25:20.217 [2024-07-25 00:02:50.646259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.646285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.646425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.646451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.646566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.646591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.646710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.646736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.646902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.646927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.647062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.647227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 
00:25:20.218 [2024-07-25 00:02:50.647412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.647560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.647758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.647939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.647964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.648875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.648902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 
00:25:20.218 [2024-07-25 00:02:50.649070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.649214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.649381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.649561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.649704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.649841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.649866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 
00:25:20.218 [2024-07-25 00:02:50.650664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.650853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.218 qpair failed and we were unable to recover it. 00:25:20.218 [2024-07-25 00:02:50.650993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.218 [2024-07-25 00:02:50.651018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.651175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.651344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.651511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.651682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.651872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.651982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.652185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 
00:25:20.219 [2024-07-25 00:02:50.652321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.652455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.652625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.652796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.652956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.652982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.653106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.653249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.653383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.653552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.653746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 
00:25:20.219 [2024-07-25 00:02:50.653883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.653908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.654888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.654914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.655052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.655261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.655429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 
00:25:20.219 [2024-07-25 00:02:50.655600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.655744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.655880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.655905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 00:25:20.219 [2024-07-25 00:02:50.656948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.219 [2024-07-25 00:02:50.656973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.219 qpair failed and we were unable to recover it. 
00:25:20.219 [2024-07-25 00:02:50.657085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.657250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.657441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.657591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.657724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.657919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.657944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.658055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.658227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.658425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.658564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 
00:25:20.220 [2024-07-25 00:02:50.658717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.658884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.658909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.659877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.659903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.660019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.660218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 
00:25:20.220 [2024-07-25 00:02:50.660370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.660565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.660733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.660900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.660929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.661919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.661946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 
00:25:20.220 [2024-07-25 00:02:50.662112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.662254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.662398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.662564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.662702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.662894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.662919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.663034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.663059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.220 [2024-07-25 00:02:50.663179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.220 [2024-07-25 00:02:50.663205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.220 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.663335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.663361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.663480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.663506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 
00:25:20.221 [2024-07-25 00:02:50.663674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.663700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.663824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.663850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.663962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.663992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.664946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.664971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.665077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 
00:25:20.221 [2024-07-25 00:02:50.665211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.665364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.665528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.665695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.665873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.665898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.666042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.666068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.666186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.666211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.666394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.666419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.666567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.666593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 00:25:20.221 [2024-07-25 00:02:50.666706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.221 [2024-07-25 00:02:50.666733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.221 qpair failed and we were unable to recover it. 
00:25:20.221 [2024-07-25 00:02:50.666849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.221 [2024-07-25 00:02:50.666875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.221 qpair failed and we were unable to recover it.
00:25:20.221 [2024-07-25 00:02:50.667009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.221 [2024-07-25 00:02:50.667035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.221 qpair failed and we were unable to recover it.
[... the same three-line error sequence — connect() failed, errno = 111 (connection refused); sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats for every intervening connection retry, timestamps 2024-07-25 00:02:50.667169 through 00:02:50.700310 ...]
00:25:20.227 [2024-07-25 00:02:50.700420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.227 [2024-07-25 00:02:50.700446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.227 qpair failed and we were unable to recover it.
00:25:20.227 [2024-07-25 00:02:50.700553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.227 [2024-07-25 00:02:50.700579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.227 qpair failed and we were unable to recover it. 00:25:20.227 [2024-07-25 00:02:50.700707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.227 [2024-07-25 00:02:50.700733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.227 qpair failed and we were unable to recover it. 00:25:20.227 [2024-07-25 00:02:50.700879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.227 [2024-07-25 00:02:50.700904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.227 qpair failed and we were unable to recover it. 00:25:20.227 [2024-07-25 00:02:50.701043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.227 [2024-07-25 00:02:50.701069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.227 qpair failed and we were unable to recover it. 00:25:20.227 [2024-07-25 00:02:50.701183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.227 [2024-07-25 00:02:50.701209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.227 qpair failed and we were unable to recover it. 00:25:20.227 [2024-07-25 00:02:50.701325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.701351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.701471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.701497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.701645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.701671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.701777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.701802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.701913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.701939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 
00:25:20.228 [2024-07-25 00:02:50.702083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.702269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.702409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.702584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.702745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.702878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.702903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.703082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.703216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.703398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.703532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 
00:25:20.228 [2024-07-25 00:02:50.703699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.703840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.703866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.704929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.704954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.705093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.705118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 
00:25:20.228 [2024-07-25 00:02:50.705260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.705287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.705422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.705447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.705553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.228 [2024-07-25 00:02:50.705578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.228 qpair failed and we were unable to recover it. 00:25:20.228 [2024-07-25 00:02:50.705696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.705722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.705858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.705883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.705996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.706147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.706314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.706483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.706651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 
00:25:20.229 [2024-07-25 00:02:50.706798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.706963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.706988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.707881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.707906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 
00:25:20.229 [2024-07-25 00:02:50.708325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.708911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.708936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 
00:25:20.229 [2024-07-25 00:02:50.709827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.709853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.709993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.710937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.710963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.711073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.711098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 00:25:20.229 [2024-07-25 00:02:50.711205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.711230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.229 qpair failed and we were unable to recover it. 
00:25:20.229 [2024-07-25 00:02:50.711363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.229 [2024-07-25 00:02:50.711388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.711534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.711559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.711700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.711725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.711838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.711864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.711975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.712112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.712281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.712448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.712590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.712793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 
00:25:20.230 [2024-07-25 00:02:50.712964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.712989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.713901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.713926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 
00:25:20.230 [2024-07-25 00:02:50.714532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.714864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.714980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.715944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.715969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 
00:25:20.230 [2024-07-25 00:02:50.716125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.716295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.716476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.716613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.716757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.716897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.716924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.717047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.717073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.717187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.717213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.230 qpair failed and we were unable to recover it. 00:25:20.230 [2024-07-25 00:02:50.717357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.230 [2024-07-25 00:02:50.717384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.717495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.717520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 
00:25:20.231 [2024-07-25 00:02:50.717653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.717678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.717817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.717843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.718895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.718920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.719084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 
00:25:20.231 [2024-07-25 00:02:50.719277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.719416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.719613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.719766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.719904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.719929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 
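Editor's note on the failure above: on Linux, errno = 111 is ECONNREFUSED, meaning the TCP SYN to 10.0.0.2:4420 (the NVMe/TCP well-known port) is being answered with an RST because nothing is listening there yet. The sketch below is a minimal, self-contained reproduction of that same connect() outcome using plain POSIX sockets; it is not SPDK's posix_sock_create, and the address and port are simply the ones taken from the log.

/* Minimal sketch: reproduce the connect() failure that posix_sock_create
 * reports above. Plain POSIX sockets only -- this is NOT SPDK code.
 * Address and port (10.0.0.2:4420) are the ones from the log. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    if (inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr) != 1) {
        fprintf(stderr, "bad address\n");
        close(fd);
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111,
         * ECONNREFUSED -- the same value the posix.c:1023 message shows. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run on the test host while the target is down, this prints "connect() failed, errno = 111 (Connection refused)", matching the log line.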
00:25:20.231 [2024-07-25 00:02:50.720822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.720960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.720985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.721124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.721149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.721307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.721345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.721502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.721534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.721691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.721718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.721858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.721885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 
00:25:20.231 [2024-07-25 00:02:50.722529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.722942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.722968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.231 [2024-07-25 00:02:50.723106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.231 [2024-07-25 00:02:50.723132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.231 qpair failed and we were unable to recover it. 00:25:20.232 [2024-07-25 00:02:50.723266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.232 [2024-07-25 00:02:50.723292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.232 qpair failed and we were unable to recover it. 00:25:20.232 [2024-07-25 00:02:50.723399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.232 [2024-07-25 00:02:50.723425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.232 qpair failed and we were unable to recover it. 00:25:20.232 [2024-07-25 00:02:50.723576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.232 [2024-07-25 00:02:50.723601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.232 qpair failed and we were unable to recover it. 00:25:20.232 [2024-07-25 00:02:50.723742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.232 [2024-07-25 00:02:50.723767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.232 qpair failed and we were unable to recover it. 00:25:20.232 [2024-07-25 00:02:50.723913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.232 [2024-07-25 00:02:50.723939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.232 qpair failed and we were unable to recover it. 
00:25:20.232 [2024-07-25 00:02:50.724078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.724234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.724409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.724548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.724738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.724872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.724897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.725958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.725989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.726948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.726974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.727875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.727901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.232 [2024-07-25 00:02:50.728041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.232 [2024-07-25 00:02:50.728067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.232 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.728236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.728415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.728558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.728722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.728858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.728976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.729860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.729987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.730947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.730985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.731960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.731988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.732877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.732904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.733056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.733095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.733216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.733248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.233 [2024-07-25 00:02:50.733393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.233 [2024-07-25 00:02:50.733418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.233 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.733525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.733550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.733662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.733688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.733839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.733864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.733982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.734934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.734962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.735893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.735919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.736911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.736943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.737826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.737851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.738028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.738220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.738394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.738572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.234 [2024-07-25 00:02:50.738730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.234 qpair failed and we were unable to recover it.
00:25:20.234 [2024-07-25 00:02:50.738870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.738897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.739940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.739965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.740877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.740916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.741895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.741922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.742939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.742971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.743921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.743947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.744119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.744326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.744472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.744643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.235 [2024-07-25 00:02:50.744804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.235 qpair failed and we were unable to recover it.
00:25:20.235 [2024-07-25 00:02:50.744921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.744946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.745846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.745872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.746807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.746833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.747844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.747975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.748156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.748328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.748507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.748705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.748871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.748897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.749893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.749920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.750063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.750089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.750207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.750236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.750364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.236 [2024-07-25 00:02:50.750389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.236 qpair failed and we were unable to recover it.
00:25:20.236 [2024-07-25 00:02:50.750529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.750555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.750669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.750695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.750837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.750863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.751004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.751029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.751144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.751172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.751320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.751346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.751454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.237 [2024-07-25 00:02:50.751481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.237 qpair failed and we were unable to recover it.
00:25:20.237 [2024-07-25 00:02:50.751589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.751615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.751734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.751760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.751873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.751900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.752893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.752919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.753087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 
00:25:20.237 [2024-07-25 00:02:50.753233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.753412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.753560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.753715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.753852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.753883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.754000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.754171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.754360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.754581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.754726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 
00:25:20.237 [2024-07-25 00:02:50.754892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.754918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.755063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.755089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.755194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.755220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.755361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.755388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.755499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.755525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.237 qpair failed and we were unable to recover it. 00:25:20.237 [2024-07-25 00:02:50.755635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.237 [2024-07-25 00:02:50.755660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.755788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.755813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.755955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.755981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.756133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.756284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 
00:25:20.238 [2024-07-25 00:02:50.756455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.756583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.756749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.756887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.756913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.757815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 
00:25:20.238 [2024-07-25 00:02:50.757950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.757975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.758918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.758943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 
00:25:20.238 [2024-07-25 00:02:50.759546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.759958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.759984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.760114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.760152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.760308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.760336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.760480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.760506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.760655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.760682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.238 [2024-07-25 00:02:50.760833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.238 [2024-07-25 00:02:50.760859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.238 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.761023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 
00:25:20.239 [2024-07-25 00:02:50.761180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.761349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.761494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.761683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.761837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.761863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 
00:25:20.239 [2024-07-25 00:02:50.762761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.762894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.762923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.763956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.763982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.764102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.764274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 
00:25:20.239 [2024-07-25 00:02:50.764422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.764563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.764718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.764888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.764913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.765026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.765051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.239 [2024-07-25 00:02:50.765171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.239 [2024-07-25 00:02:50.765196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.239 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.765345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.765374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.765498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.765528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.765645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.765672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.765792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.765818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 
00:25:20.523 [2024-07-25 00:02:50.765934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.765960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.766861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.766887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.767008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.767034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.767152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.767180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 00:25:20.523 [2024-07-25 00:02:50.767293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.523 [2024-07-25 00:02:50.767320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.523 qpair failed and we were unable to recover it. 
00:25:20.524 [2024-07-25 00:02:50.767437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.767463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.767602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.767628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.767782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.767808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.767916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.767943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.768813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.768839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 
00:25:20.524 [2024-07-25 00:02:50.768976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.769929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.769954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.770117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.770287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.770454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 
00:25:20.524 [2024-07-25 00:02:50.770614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.770789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.770946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.770975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.771119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.771144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.771301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.771327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.771448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.771474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.771651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.771690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.771849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.771877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.772042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.772212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 
00:25:20.524 [2024-07-25 00:02:50.772398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.772577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.772718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.772853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.772879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.773043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.773076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.773196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.773223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.524 qpair failed and we were unable to recover it. 00:25:20.524 [2024-07-25 00:02:50.773377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.524 [2024-07-25 00:02:50.773403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.773548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.773573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.773739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.773765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.773901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.773926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 
00:25:20.525 [2024-07-25 00:02:50.774039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.774209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.774385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.774524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.774667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.774841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.774867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.775007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.775032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.775176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.775202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.775355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.775383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 00:25:20.525 [2024-07-25 00:02:50.775527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.525 [2024-07-25 00:02:50.775560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.525 qpair failed and we were unable to recover it. 
00:25:20.525 [2024-07-25 00:02:50.775734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.525 [2024-07-25 00:02:50.775760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.525 qpair failed and we were unable to recover it.
00:25:20.525 [2024-07-25 00:02:50.776214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.525 [2024-07-25 00:02:50.776251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.525 qpair failed and we were unable to recover it.
[identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplets repeat from 00:02:50.775734 through 00:02:50.809534, cycling over tqpair=0x2300250, 0x7f8f14000b90, and 0x7f8f0c000b90, all against addr=10.0.0.2, port=4420]
00:25:20.529 [2024-07-25 00:02:50.803715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e230 (9): Bad file descriptor
00:25:20.529 [2024-07-25 00:02:50.809509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.529 [2024-07-25 00:02:50.809534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.529 qpair failed and we were unable to recover it.
00:25:20.529 [2024-07-25 00:02:50.809678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.809703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.809873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.809898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.810945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.810973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.811092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.811271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 
00:25:20.530 [2024-07-25 00:02:50.811427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.811574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.811767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.811931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.811957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.812929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.812961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 
00:25:20.530 [2024-07-25 00:02:50.813108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.813260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.813420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.813565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.813735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.813905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.813932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.814081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.814253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.814405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.814577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 
00:25:20.530 [2024-07-25 00:02:50.814750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.814944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.814970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.815115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.815321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.815501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.815655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.815826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.815980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.816145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.816287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 
00:25:20.530 [2024-07-25 00:02:50.816458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.816604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.816741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.816937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.816962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.817107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.817133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.530 qpair failed and we were unable to recover it. 00:25:20.530 [2024-07-25 00:02:50.817272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.530 [2024-07-25 00:02:50.817298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.817444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.817482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.817604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.817630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.817774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.817800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.817973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.817998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 
00:25:20.531 [2024-07-25 00:02:50.818133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.818159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.818302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.818328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.818474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.818502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.818653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.818678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.818820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.818845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.818983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.819148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.819312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.819479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.819627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 
00:25:20.531 [2024-07-25 00:02:50.819778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.819941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.819967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.820957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.820983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.821097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.821270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 
00:25:20.531 [2024-07-25 00:02:50.821403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.821542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.821675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.821849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.821878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.822818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.822844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 
00:25:20.531 [2024-07-25 00:02:50.822981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.531 [2024-07-25 00:02:50.823892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.531 [2024-07-25 00:02:50.823917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.531 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 
00:25:20.532 [2024-07-25 00:02:50.824557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.824850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.824990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.825855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.825984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 
00:25:20.532 [2024-07-25 00:02:50.826167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.826365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.826534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.826711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.826899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.826924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.827071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.827236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.827382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.827579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.827725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 
00:25:20.532 [2024-07-25 00:02:50.827873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.827898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.828874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.828901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.829047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.829230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.829406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 
00:25:20.532 [2024-07-25 00:02:50.829579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.829747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.829918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.829944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.830937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.830962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 00:25:20.532 [2024-07-25 00:02:50.831105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.532 [2024-07-25 00:02:50.831130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.532 qpair failed and we were unable to recover it. 
00:25:20.532 [2024-07-25 00:02:50.831257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.532 [2024-07-25 00:02:50.831306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.532 qpair failed and we were unable to recover it.
00:25:20.532 [the identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats back-to-back from 2024-07-25 00:02:50.831 through 00:02:50.866, cycling over tqpair=0x2300250, tqpair=0x7f8f14000b90 and tqpair=0x7f8f0c000b90, always against addr=10.0.0.2, port=4420]
00:25:20.537 [2024-07-25 00:02:50.866228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.537 [2024-07-25 00:02:50.866258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.537 qpair failed and we were unable to recover it.
00:25:20.537 [2024-07-25 00:02:50.866429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.866455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.866601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.866627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.866763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.866793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.866951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.866977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.867900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.867925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 
00:25:20.537 [2024-07-25 00:02:50.868092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.868238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.868439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.868607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.868771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.868932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.868957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.869071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.869272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.869426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.869620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 
00:25:20.537 [2024-07-25 00:02:50.869812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.537 [2024-07-25 00:02:50.869973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.537 [2024-07-25 00:02:50.869998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.537 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.870190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.870363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.870535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.870737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.870873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.870983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.871176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.871346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 
00:25:20.538 [2024-07-25 00:02:50.871490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.871634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.871813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.871838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.872964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.872990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 
00:25:20.538 [2024-07-25 00:02:50.873134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.873159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.873307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.873333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.873494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.873523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.873696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.873722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.873862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.873887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.874037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.874232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.874373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.874567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.874764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 
00:25:20.538 [2024-07-25 00:02:50.874939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.874964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.875896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.875922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.876072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.876266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.876422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 
00:25:20.538 [2024-07-25 00:02:50.876564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.876726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.876888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.876913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.877059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.877084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.877193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.877218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.877395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.877434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.538 qpair failed and we were unable to recover it. 00:25:20.538 [2024-07-25 00:02:50.877558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.538 [2024-07-25 00:02:50.877585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.877756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.877783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.877922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.877948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.878098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 
00:25:20.539 [2024-07-25 00:02:50.878237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.878397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.878580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.878768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.878939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.878965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.879085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.879111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.879284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.879311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.879483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.879511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.879643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.879669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.879835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.879861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 
00:25:20.539 [2024-07-25 00:02:50.880001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.880173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.880374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.880573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.880721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.880915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.880941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.881085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.881255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.881429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.881593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 
00:25:20.539 [2024-07-25 00:02:50.881761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.881909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.881935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.882963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.882989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.883115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.883313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 
00:25:20.539 [2024-07-25 00:02:50.883455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.883615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.883759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.883953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.883979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 00:25:20.539 [2024-07-25 00:02:50.884940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.539 [2024-07-25 00:02:50.884971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.539 qpair failed and we were unable to recover it. 
00:25:20.539 [2024-07-25 00:02:50.885131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.885302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.885446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.885611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.885754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.885924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.885949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.886065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.886235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.886383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.886516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 
00:25:20.540 [2024-07-25 00:02:50.886705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.886870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.886896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.887924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.887949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.888089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.888116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 00:25:20.540 [2024-07-25 00:02:50.888229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.540 [2024-07-25 00:02:50.888262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.540 qpair failed and we were unable to recover it. 
00:25:20.540 [2024-07-25 00:02:50.888409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.540 [2024-07-25 00:02:50.888435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.540 qpair failed and we were unable to recover it.
[... the same three-line pattern — connect() failed, errno = 111; sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 00:02:50.888576 through 00:02:50.923655, cycling through tqpair=0x2300250, 0x7f8f04000b90, 0x7f8f0c000b90, and 0x7f8f14000b90; duplicate entries collapsed ...]
00:25:20.545 [2024-07-25 00:02:50.923655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.545 [2024-07-25 00:02:50.923681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.545 qpair failed and we were unable to recover it.
00:25:20.545 [2024-07-25 00:02:50.923819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.923844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.923963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.923988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.924923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.924948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.925092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.925118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 00:25:20.545 [2024-07-25 00:02:50.925270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.545 [2024-07-25 00:02:50.925295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.545 qpair failed and we were unable to recover it. 
00:25:20.545 [2024-07-25 00:02:50.926082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.545 [2024-07-25 00:02:50.926123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.545 qpair failed and we were unable to recover it.
...
00:25:20.545 [2024-07-25 00:02:50.927073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.545 [2024-07-25 00:02:50.927100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.545 qpair failed and we were unable to recover it.
...
00:25:20.545 [2024-07-25 00:02:50.928069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.545 [2024-07-25 00:02:50.928096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.545 qpair failed and we were unable to recover it.
...
00:25:20.547 [2024-07-25 00:02:50.938565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.547 [2024-07-25 00:02:50.938604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.547 qpair failed and we were unable to recover it.
...
00:25:20.547 [2024-07-25 00:02:50.939605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.547 [2024-07-25 00:02:50.939633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.547 qpair failed and we were unable to recover it.
...
00:25:20.548 [2024-07-25 00:02:50.950191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.950217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.950372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.950398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.950569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.950593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.950730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.950755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.950890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.950915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.951028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.951195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.951353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.951543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.951708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 
00:25:20.548 [2024-07-25 00:02:50.951848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.951873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.952868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.952894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.953036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.953060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.953170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.953195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.953333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.953359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 
00:25:20.548 [2024-07-25 00:02:50.953470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.953495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.953610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.548 [2024-07-25 00:02:50.953635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.548 qpair failed and we were unable to recover it. 00:25:20.548 [2024-07-25 00:02:50.953753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.953778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.953925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.953951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.954911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.954936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 
00:25:20.549 [2024-07-25 00:02:50.955077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.955234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.955401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.955592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.955785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.955953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.955978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.956096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.956122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.956289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.956316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.956467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.956492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.956659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.956684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 
00:25:20.549 [2024-07-25 00:02:50.956829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.956858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.956998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.957932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.957958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.958129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.958265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 
00:25:20.549 [2024-07-25 00:02:50.958441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.958575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.958747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.958910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.958935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.959953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.959978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 
00:25:20.549 [2024-07-25 00:02:50.960143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.960313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.960476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.960642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.960806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.960969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.960995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.961158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.961184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.961350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.961394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.961549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.961578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.549 qpair failed and we were unable to recover it. 00:25:20.549 [2024-07-25 00:02:50.961694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.549 [2024-07-25 00:02:50.961721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 
00:25:20.550 [2024-07-25 00:02:50.961862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.961888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.962891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.962916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.963056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.963225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.963404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 
00:25:20.550 [2024-07-25 00:02:50.963570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.963768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.963940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.963967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.964949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.964974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.965143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 
00:25:20.550 [2024-07-25 00:02:50.965274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.965447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.965641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.965790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.965963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.965989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.966131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.966155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.966272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.966298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.966468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.966493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.966663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.966690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.966834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.966860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 
00:25:20.550 [2024-07-25 00:02:50.967008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.967173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.967356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.967524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.967669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.967865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.967890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.968028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.968053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.968194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.968224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.968372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.968398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 00:25:20.550 [2024-07-25 00:02:50.968516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.550 [2024-07-25 00:02:50.968542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.550 qpair failed and we were unable to recover it. 
00:25:20.550 [2024-07-25 00:02:50.968713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.968738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.968885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.968910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.969873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.969898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.970065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.970233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 
00:25:20.551 [2024-07-25 00:02:50.970391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.970568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.970765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.970916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.970942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.971826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.971852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 
00:25:20.551 [2024-07-25 00:02:50.971992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.972165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.972312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.972479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.972668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.972854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.972879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.973008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.973033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.973210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.973235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.973408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.973451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 00:25:20.551 [2024-07-25 00:02:50.973613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.973655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it. 
00:25:20.551 [2024-07-25 00:02:50.973827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.551 [2024-07-25 00:02:50.973871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.551 qpair failed and we were unable to recover it.
00:25:20.551 [... the three-line pattern above repeats roughly 200 further times between 00:02:50.974 and 00:02:51.013, alternating between tqpair=0x7f8f0c000b90 and tqpair=0x2300250; every attempt targets addr=10.0.0.2, port=4420 and every connect() fails with errno = 111 ...]
00:25:20.556 [2024-07-25 00:02:51.013135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.013162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it.
00:25:20.556 [2024-07-25 00:02:51.013360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.013386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.013542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.013569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.013755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.013783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.013965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.013994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.014178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.014206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.014351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.014376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.014545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.014574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.014753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.014782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.014934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.014962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.015110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.015137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 
00:25:20.556 [2024-07-25 00:02:51.015330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.015356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.015522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.015548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.015677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.015705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.015861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.015892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.016072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.016100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.016253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.016297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.016446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.016472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.016613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.016638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.016826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.016854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.017014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 
00:25:20.556 [2024-07-25 00:02:51.017203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.017386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.017564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.017745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.017913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.017937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.018099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.018279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.018423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.018612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.018758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 
00:25:20.556 [2024-07-25 00:02:51.018902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.018927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.019890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.556 [2024-07-25 00:02:51.019917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.556 qpair failed and we were unable to recover it. 00:25:20.556 [2024-07-25 00:02:51.020075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.020101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.020291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.020317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.020460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.020485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 
00:25:20.557 [2024-07-25 00:02:51.020655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.020688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.020893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.020920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.021960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.021988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.022157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.022183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.022355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.022382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 
00:25:20.557 [2024-07-25 00:02:51.022525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.022551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.022703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.022729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.022897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.022921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.023062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.023228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.023423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.023620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.023794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.023975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.024167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 
00:25:20.557 [2024-07-25 00:02:51.024334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.024539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.024676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.024841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.024866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.025936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.025961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 
00:25:20.557 [2024-07-25 00:02:51.026072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.026216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.026434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.026569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.026772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.026958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.026986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.027168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.027195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.027364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.027389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.557 [2024-07-25 00:02:51.027534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.557 [2024-07-25 00:02:51.027559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.557 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.027725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.027751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 
00:25:20.558 [2024-07-25 00:02:51.027921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.027946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.028121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.028150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.028308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.028348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.028509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.028533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.028647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.028673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.028847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.028875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.029063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.029088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.029280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.029308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.029481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.029508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.029676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.029701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 
00:25:20.558 [2024-07-25 00:02:51.029816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.029842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.030029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.030054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.030219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.030255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.030387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.030415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.030613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.030640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.030832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.030857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.031015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.031196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.031413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.031602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 
00:25:20.558 [2024-07-25 00:02:51.031766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.031957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.031982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.032116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.032143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.032270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.032299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.032492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.032518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.032704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.032731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.032885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.032914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.033058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.033232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.033429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 
00:25:20.558 [2024-07-25 00:02:51.033617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.033762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.033962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.033990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.034123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.034147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.034262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.034287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.034453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.034478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.034644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.034669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.034802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.034831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.035012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.035169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 
00:25:20.558 [2024-07-25 00:02:51.035338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.035561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.035735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.035905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.035930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.036076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.036101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.036251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.036277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.558 qpair failed and we were unable to recover it. 00:25:20.558 [2024-07-25 00:02:51.036487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.558 [2024-07-25 00:02:51.036514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.559 qpair failed and we were unable to recover it. 00:25:20.559 [2024-07-25 00:02:51.036660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.559 [2024-07-25 00:02:51.036687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.559 qpair failed and we were unable to recover it. 00:25:20.559 [2024-07-25 00:02:51.036829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.559 [2024-07-25 00:02:51.036853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.559 qpair failed and we were unable to recover it. 00:25:20.559 [2024-07-25 00:02:51.037021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.559 [2024-07-25 00:02:51.037047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.559 qpair failed and we were unable to recover it. 
00:25:20.559 [2024-07-25 00:02:51.037179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.559 [2024-07-25 00:02:51.037207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.559 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with only the microsecond timestamps changing, roughly 210 occurrences in total, from 2024-07-25 00:02:51.037179 through 00:02:51.074724, log timestamps 00:25:20.559-00:25:20.564 ...]
00:25:20.564 [2024-07-25 00:02:51.074836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.074862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.075954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.075979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.076122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.076146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.076305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.076333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 
00:25:20.564 [2024-07-25 00:02:51.076489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.076514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.076650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.076675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.076812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.076847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.077061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.077195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.077374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.077609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.077794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.077978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.078145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 
00:25:20.564 [2024-07-25 00:02:51.078288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.078512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.078699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.078868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.078912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.564 [2024-07-25 00:02:51.079062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.564 [2024-07-25 00:02:51.079090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.564 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.079278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.079304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.079472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.079500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.079673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.079701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.079830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.079854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.079992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.080017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 
00:25:20.565 [2024-07-25 00:02:51.080183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.080211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.080382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.080407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.080598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.080626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.080782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.080810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.080976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.081159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.081341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.081542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.081689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.081857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.081886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 
00:25:20.565 [2024-07-25 00:02:51.082026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.082164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.082359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.082528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.082662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.082831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.082857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.083004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.083175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.083381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.083567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 
00:25:20.565 [2024-07-25 00:02:51.083735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.083905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.083932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.084075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.084099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.084253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.084296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.084475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.084503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.084699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.084724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.084840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.084865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.085009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.085034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.085177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.085203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.085346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.085371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 
00:25:20.565 [2024-07-25 00:02:51.085560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.085587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.565 [2024-07-25 00:02:51.085763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.565 [2024-07-25 00:02:51.085788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.565 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.085949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.085977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.086128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.086156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.086313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.086339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.086489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.086514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.086657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.086682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.086831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.086856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.087039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.087188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 
00:25:20.566 [2024-07-25 00:02:51.087365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.087560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.087749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.087921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.087946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.088061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.088085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.088263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.088289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.088461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.088486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.088630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.088655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.088815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.088858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.089044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 
00:25:20.566 [2024-07-25 00:02:51.089232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.089393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.089580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.089727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.089871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.089895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.090062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.090228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.090457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.090649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.090794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 
00:25:20.566 [2024-07-25 00:02:51.090930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.090954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.091090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.566 [2024-07-25 00:02:51.091116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.566 qpair failed and we were unable to recover it. 00:25:20.566 [2024-07-25 00:02:51.091284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.091313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.091486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.091511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.091686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.091711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.091843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.091871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.092030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.092218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.092387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.092548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 
00:25:20.567 [2024-07-25 00:02:51.092724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.092893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.092918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.093949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.093978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.094188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.094213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.094358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.094383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 
00:25:20.567 [2024-07-25 00:02:51.094494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.094519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.094636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.094661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.094803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.094827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.094972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.095191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.095362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.095505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.095674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.095843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.095867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.096009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 
00:25:20.567 [2024-07-25 00:02:51.096208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.096380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.096550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.096729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.096867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.096892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.097073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.097100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.097233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.097269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.097426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.097450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.097570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.097596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 00:25:20.567 [2024-07-25 00:02:51.097736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.567 [2024-07-25 00:02:51.097761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.567 qpair failed and we were unable to recover it. 
00:25:20.567 [2024-07-25 00:02:51.097875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.568 [2024-07-25 00:02:51.097900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.568 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for every retried connection attempt on tqpair=0x2300250 between 00:02:51.097875 and 00:02:51.134871; each attempt fails with errno = 111 and ends with "qpair failed and we were unable to recover it." ...]
00:25:20.856 [2024-07-25 00:02:51.134844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.856 [2024-07-25 00:02:51.134871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.856 qpair failed and we were unable to recover it.
00:25:20.856 [2024-07-25 00:02:51.135010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.135151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.135340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.135546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.135731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.135920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.135948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.136115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.136140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.136328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.136356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.136515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.136543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.136683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.136709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 
00:25:20.856 [2024-07-25 00:02:51.136833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.136858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.137940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.137964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.138146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.138170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.138313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.138339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.138479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.138521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 
00:25:20.856 [2024-07-25 00:02:51.138676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.138703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.138840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.138865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.138999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.139024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.139165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.139192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.139333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.139358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.856 qpair failed and we were unable to recover it. 00:25:20.856 [2024-07-25 00:02:51.139512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.856 [2024-07-25 00:02:51.139552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.139710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.139737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.139892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.139916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.140033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.140175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 
00:25:20.857 [2024-07-25 00:02:51.140326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.140467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.140669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.140892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.140916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.141943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.141967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 
00:25:20.857 [2024-07-25 00:02:51.142087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.142881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.142988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.143153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.143341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.143532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 
00:25:20.857 [2024-07-25 00:02:51.143714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.143873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.143898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.144965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.144990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.145155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.145181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 00:25:20.857 [2024-07-25 00:02:51.145313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.857 [2024-07-25 00:02:51.145339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.857 qpair failed and we were unable to recover it. 
00:25:20.858 [message block repeated 116 more times for tqpair=0x7f8f14000b90: connect() failed, errno = 111 on addr=10.0.0.2, port=4420, timestamps 00:02:51.144215 through 00:02:51.164645]
00:25:20.860 [2024-07-25 00:02:51.164861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.860 [2024-07-25 00:02:51.164889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:20.860 qpair failed and we were unable to recover it.
00:25:20.860 [2024-07-25 00:02:51.165023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.860 [2024-07-25 00:02:51.165051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.860 qpair failed and we were unable to recover it. 00:25:20.860 [2024-07-25 00:02:51.165178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.860 [2024-07-25 00:02:51.165220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.860 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.165397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.165423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.165578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.165606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.165761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.165786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.165955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.165981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.166186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.166214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.166375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.166400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.166588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.166616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.166763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.166792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 
00:25:20.861 [2024-07-25 00:02:51.166963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.166988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.167149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.167323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.167491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.167634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.167848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.167981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.168175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.168406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.168579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 
00:25:20.861 [2024-07-25 00:02:51.168731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.168908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.168933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.169125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.169150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.169340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.169368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.169525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.169554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.169734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.169759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.169879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.169904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.170043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.170217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.170444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 
00:25:20.861 [2024-07-25 00:02:51.170626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.170815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.170950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.170976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.171179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.171205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.171379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.171405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.171570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.171598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.171748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.171777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.171939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.861 [2024-07-25 00:02:51.171964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.861 qpair failed and we were unable to recover it. 00:25:20.861 [2024-07-25 00:02:51.172129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.172267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 
00:25:20.862 [2024-07-25 00:02:51.172455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.172591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.172785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.172945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.172971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.173971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.173997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 
00:25:20.862 [2024-07-25 00:02:51.174102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.174282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.174455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.174614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.174785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.174950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.174975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.175117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.175158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.175317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.175347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.175475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.175500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.175668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.175709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 
00:25:20.862 [2024-07-25 00:02:51.175861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.175889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.176042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.176274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.176458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.176618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.176811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.176986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.177163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.177334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.177530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 
00:25:20.862 [2024-07-25 00:02:51.177685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.177857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.177883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.178020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.178045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.178153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.178178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.178370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.178399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.178527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.862 [2024-07-25 00:02:51.178555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.862 qpair failed and we were unable to recover it. 00:25:20.862 [2024-07-25 00:02:51.178712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.178738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.178848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.178873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.179021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.179237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 
00:25:20.863 [2024-07-25 00:02:51.179378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.179598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.179776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.179921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.179946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.180943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.180968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 
00:25:20.863 [2024-07-25 00:02:51.181108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.181133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.181249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.181275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.181417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.181461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.181628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.181653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.181829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.181855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.181999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.182174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.182340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.182538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.182709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 
00:25:20.863 [2024-07-25 00:02:51.182904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.182933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.183114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.183158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.183281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.183308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.183431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.183458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.183620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.183665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.183856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.183900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.184026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.184055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.184180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.863 [2024-07-25 00:02:51.184208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.863 qpair failed and we were unable to recover it. 00:25:20.863 [2024-07-25 00:02:51.184374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.184401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.184619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.184672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 
00:25:20.864 [2024-07-25 00:02:51.184826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.184854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.185036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.185188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.185368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.185581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.185780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.185990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.186147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.186341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.186529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 
00:25:20.864 [2024-07-25 00:02:51.186714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.186920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.186947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.187102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.187134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.187350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.187377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.187522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.187548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.187709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.187737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.187897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.187925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.188104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.188289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.188429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 
00:25:20.864 [2024-07-25 00:02:51.188601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.188770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.188906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.188932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.189943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.189986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.190198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.190224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 
00:25:20.864 [2024-07-25 00:02:51.190396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.190422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.190574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.190601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.190765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.190793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.190913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.190941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.191099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.191127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.864 [2024-07-25 00:02:51.191252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.864 [2024-07-25 00:02:51.191278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.864 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.191398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.191424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.191559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.191584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.191692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.191717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.191894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.191935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 
00:25:20.865 [2024-07-25 00:02:51.192116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.192145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.192278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.192310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.192450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.192475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.192614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.192642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.192864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.192893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.193053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.193081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.193228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.193277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.193394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.193421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.193564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.193609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.193772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.193800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 
00:25:20.865 [2024-07-25 00:02:51.194009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.194228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.194456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.194654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.194794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.194952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.194981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.195107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.195137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.195345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.195371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.195530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.195558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.195709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.195735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 
00:25:20.865 [2024-07-25 00:02:51.195845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.195872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.196035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.196063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.196225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.196270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.196393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.196419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.196585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.196610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.196922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.196951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.197144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.197173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.197323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.197349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.197460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.197487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.197681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.197709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 
00:25:20.865 [2024-07-25 00:02:51.197843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.197872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.198055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.198083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.865 [2024-07-25 00:02:51.198269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.865 [2024-07-25 00:02:51.198312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.865 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.198429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.198455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.198563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.198588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.198773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.198800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.198959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.198987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.199144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.199172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.199338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.199364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.199521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.199549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 
00:25:20.866 [2024-07-25 00:02:51.199716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.199741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.199854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.199880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.200055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.200084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.200260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.200305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.200428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.200454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.200643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.200672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.200890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.200918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.201076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.201104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.201277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.201310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.201426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.201450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 
00:25:20.866 [2024-07-25 00:02:51.201574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.201600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.201800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.201829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.201994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.202167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.202373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.202515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.202681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.202874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.202901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.203057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.203213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 
00:25:20.866 [2024-07-25 00:02:51.203416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.203555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.203691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.203879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.203907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.204080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.204295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.204451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.204648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.204837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 00:25:20.866 [2024-07-25 00:02:51.204992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.205021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.866 qpair failed and we were unable to recover it. 
00:25:20.866 [2024-07-25 00:02:51.205157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.866 [2024-07-25 00:02:51.205183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.205376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.205406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.205569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.205597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.205759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.205784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.205920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.205945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.206085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.206110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.206274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.206309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.206496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.206524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.206675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.206703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.206898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.206923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 
00:25:20.867 [2024-07-25 00:02:51.207066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.207234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.207407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.207575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.207740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.207954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.207979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.208141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.208169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.208313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.208341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.208496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.208522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.208669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.208712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 
00:25:20.867 [2024-07-25 00:02:51.208902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.208930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.209098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.209123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.209236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.209271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.209420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.209446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.209627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.209652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.209819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.209844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.210007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.210207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.210403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.210555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 
00:25:20.867 [2024-07-25 00:02:51.210739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.210930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.210958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.211141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.867 [2024-07-25 00:02:51.211169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.867 qpair failed and we were unable to recover it. 00:25:20.867 [2024-07-25 00:02:51.211341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.211367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.211538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.211563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.211684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.211710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.211893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.211918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.212035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.212229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.212440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 
00:25:20.868 [2024-07-25 00:02:51.212576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.212729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.212921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.212946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.213140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.213279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.213613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.213812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.213975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.214001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.214205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.214233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 
00:25:20.868 [2024-07-25 00:02:51.214409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.214437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.214597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.214623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.214814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.214841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.215949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.215977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.216103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.216133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 
00:25:20.868 [2024-07-25 00:02:51.216334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.216360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.216531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.216559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.216739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.216774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.216917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.216943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.217092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.217117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.217282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.217310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.217458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.217484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.217624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.217666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.217845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.217873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.218033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.218058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 
00:25:20.868 [2024-07-25 00:02:51.218211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.868 [2024-07-25 00:02:51.218239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.868 qpair failed and we were unable to recover it. 00:25:20.868 [2024-07-25 00:02:51.218437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.218465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.218628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.218653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.218838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.218866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.219913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.219941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 
00:25:20.869 [2024-07-25 00:02:51.220091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.220119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.220279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.220305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.220472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.220501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.220686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.220714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.220845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.220871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.221040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.221065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.221195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.221225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.221391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.221417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.221600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.221629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 00:25:20.869 [2024-07-25 00:02:51.221761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.869 [2024-07-25 00:02:51.221789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.869 qpair failed and we were unable to recover it. 
00:25:20.873 [2024-07-25 00:02:51.250489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.873 [2024-07-25 00:02:51.250529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.873 qpair failed and we were unable to recover it.
00:25:20.874 [2024-07-25 00:02:51.257794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.874 [2024-07-25 00:02:51.257822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.874 qpair failed and we were unable to recover it. 00:25:20.874 [2024-07-25 00:02:51.258031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.258076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.258221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.258253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.258399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.258426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.258589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.258631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.258826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.258856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.259005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.259193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.259416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.259626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 
00:25:20.875 [2024-07-25 00:02:51.259789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.259954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.259980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.260123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.260153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.260271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.260299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.260489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.260517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.260692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.260721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.260930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.260975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.261089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.261115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.261265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.261292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.261451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.261495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 
00:25:20.875 [2024-07-25 00:02:51.261656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.261699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.261862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.261905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.262052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.262078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.262218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.262250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.262412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.262455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.262644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.262688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.262882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.262912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.263073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.263100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.263289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.263316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.263468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.263495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 
00:25:20.875 [2024-07-25 00:02:51.263616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.263644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.263821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.263849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.263993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.264163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.264314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.264456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.264648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.264832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.264875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.875 [2024-07-25 00:02:51.265045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.875 [2024-07-25 00:02:51.265071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.875 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.265247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.265274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 
00:25:20.876 [2024-07-25 00:02:51.265435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.265477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.265636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.265680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.265832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.265876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.266039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.266207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.266419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.266601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.266776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.266984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.267180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 
00:25:20.876 [2024-07-25 00:02:51.267319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.267478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.267715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.267933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.267959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.268104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.268130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.268289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.268319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.268497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.268540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.268710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.268753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.268868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.268894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.269012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 
00:25:20.876 [2024-07-25 00:02:51.269185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.269382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.269572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.269758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.269942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.269970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.270188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.270214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.270370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.270396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.270551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.270580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.270761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.270788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.270919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.270946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 
00:25:20.876 [2024-07-25 00:02:51.271080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.271108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.271271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.271326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.271478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.271505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.271705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.876 [2024-07-25 00:02:51.271733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.876 qpair failed and we were unable to recover it. 00:25:20.876 [2024-07-25 00:02:51.272003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.272055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.272217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.272253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.272417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.272442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.272565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.272610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.272769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.272797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.273061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 
00:25:20.877 [2024-07-25 00:02:51.273302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.273455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.273627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.273803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.273960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.273988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.274172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.274199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.274355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.274394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.274509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.274553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.274717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.274746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.275004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 
00:25:20.877 [2024-07-25 00:02:51.275202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.275412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.275581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.275762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.275929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.275957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.276113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.276140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.276330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.276356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.276476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.276502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.276622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.276646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.276800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.276828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 
00:25:20.877 [2024-07-25 00:02:51.276978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.277129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.277326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.277496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.277677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.277831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.277860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.278042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.278076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.278236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.278271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.278412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.278436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 00:25:20.877 [2024-07-25 00:02:51.278578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.877 [2024-07-25 00:02:51.278603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.877 qpair failed and we were unable to recover it. 
00:25:20.878 [2024-07-25 00:02:51.278759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.278787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.278972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.278999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.279154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.279182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.279319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.279347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.279464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.279488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.279621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.279649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.279873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.279921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.280046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.280073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.280204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.280230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.280379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.280404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 
00:25:20.878 [2024-07-25 00:02:51.280576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.280617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.280839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.280868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.280997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.281205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.281411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.281597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.281780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.281969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.281994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.282131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.282156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 00:25:20.878 [2024-07-25 00:02:51.282347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.878 [2024-07-25 00:02:51.282376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.878 qpair failed and we were unable to recover it. 
00:25:20.878 [2024-07-25 00:02:51.282568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.878 [2024-07-25 00:02:51.282593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.878 qpair failed and we were unable to recover it.
[... the same three-line failure — posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats 208 more times, identical except for timestamps, between 00:02:51.282702 and 00:02:51.319463 ...]
00:25:20.884 [2024-07-25 00:02:51.319605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.884 [2024-07-25 00:02:51.319646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.884 qpair failed and we were unable to recover it.
00:25:20.884 [2024-07-25 00:02:51.319785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.319810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.319920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.319945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.320086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.320115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.320288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.320316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.320472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.320502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.320685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.320714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.320878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.320904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.321064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.321092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.321254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.321294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.321467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.321492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 
00:25:20.884 [2024-07-25 00:02:51.321679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.321712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.321865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.321894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.322882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.322910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.323055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.323224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 
00:25:20.884 [2024-07-25 00:02:51.323414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.323602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.323737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.323904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.323933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.324067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.324093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.324238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.324273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.324470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.324499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.324660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.324686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.324830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.324873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 00:25:20.884 [2024-07-25 00:02:51.325012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.325040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.884 qpair failed and we were unable to recover it. 
00:25:20.884 [2024-07-25 00:02:51.325170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.884 [2024-07-25 00:02:51.325196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.325391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.325420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.325557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.325586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.325748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.325775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.325935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.325964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.326097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.326126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.326292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.326319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.326503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.326531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.326688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.326717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.326841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.326866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 
00:25:20.885 [2024-07-25 00:02:51.327009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.327873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.327994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.328189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.328379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.328543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 
00:25:20.885 [2024-07-25 00:02:51.328701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.328896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.328922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.329954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.329979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.330116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.330317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 
00:25:20.885 [2024-07-25 00:02:51.330480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.330616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.330783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.330954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.330979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.331130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.331155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.331331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.331359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.331550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.331575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.885 [2024-07-25 00:02:51.331712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.885 [2024-07-25 00:02:51.331740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.885 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.331932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.331959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.332130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 
00:25:20.886 [2024-07-25 00:02:51.332267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.332463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.332658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.332798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.332933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.332958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.333130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.333156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.333298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.333327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.333488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.333520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.333685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.333714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.333871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.333899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 
00:25:20.886 [2024-07-25 00:02:51.334055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.334084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.334289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.334315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.334478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.334506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.334666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.334705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.334837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.334861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 
00:25:20.886 [2024-07-25 00:02:51.335805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.335830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.335966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.336130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.336347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.336535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.336720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.336916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.336941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.337098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.337126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.337293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.337319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.886 [2024-07-25 00:02:51.337466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.337491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 
00:25:20.886 [2024-07-25 00:02:51.337601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.886 [2024-07-25 00:02:51.337642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.886 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.337801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.337830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.338901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.338929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.339086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.339113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.339281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.339314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 
00:25:20.887 [2024-07-25 00:02:51.339466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.339495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.339631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.339656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.339803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.339828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.339974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.340137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.340305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.340512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.340680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.340852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.340894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.341082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.341111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 
00:25:20.887 [2024-07-25 00:02:51.341296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.341322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.341486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.341513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.341666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.341694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.341831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.341856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.341991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.342162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.342352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.342563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.342725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 00:25:20.887 [2024-07-25 00:02:51.342894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.887 [2024-07-25 00:02:51.342919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.887 qpair failed and we were unable to recover it. 
00:25:20.887 [2024-07-25 00:02:51.343035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.887 [2024-07-25 00:02:51.343060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:20.887 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats back-to-back from 00:02:51.343 through 00:02:51.362, always for tqpair=0x2300250 ...]
00:25:20.890 [2024-07-25 00:02:51.362576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.362602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.362736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.362775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.362927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.362953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.363077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.363102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.363269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.363299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.363482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.363508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.890 [2024-07-25 00:02:51.363650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.890 [2024-07-25 00:02:51.363677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.890 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.363798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.363825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.363946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.363971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.364118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 
00:25:20.891 [2024-07-25 00:02:51.364288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.364432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.364565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.364744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.364945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.364979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.365108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.365304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.365445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.365613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.365746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 
00:25:20.891 [2024-07-25 00:02:51.365942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.365971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.366150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.366176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.366325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.366364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.366511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.366538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.366677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.366703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.366948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.367129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.367327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.367479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.367699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 
00:25:20.891 [2024-07-25 00:02:51.367884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.367909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.368021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.368049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.368255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.368284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.368469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.368494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.368728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.891 [2024-07-25 00:02:51.368778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.891 qpair failed and we were unable to recover it. 00:25:20.891 [2024-07-25 00:02:51.368901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.368931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.369075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.369252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.369419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.369568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 
00:25:20.892 [2024-07-25 00:02:51.369735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.369932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.369965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.370928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.370954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.371121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.371291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 
00:25:20.892 [2024-07-25 00:02:51.371443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.371606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.371748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.371913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.371939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.372959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.372984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 
00:25:20.892 [2024-07-25 00:02:51.373138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.373164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.373304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.373330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.373468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.373493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.373647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.373674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.373789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.373816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.373987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.374015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.374146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.374171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.374340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.374366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.374520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.374562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 00:25:20.892 [2024-07-25 00:02:51.374733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.892 [2024-07-25 00:02:51.374759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.892 qpair failed and we were unable to recover it. 
00:25:20.898 [2024-07-25 00:02:51.408860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.408889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.409971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.409995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.410155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.410184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.410338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.410367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 
00:25:20.898 [2024-07-25 00:02:51.410557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.410582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.410708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.410733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.410878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.410904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.411925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.411953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.412115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.412139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 
00:25:20.898 [2024-07-25 00:02:51.412302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.412331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.412489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.412530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.412670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.412696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.412821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.412845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.413894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.413936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 
00:25:20.898 [2024-07-25 00:02:51.414053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.414081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.414237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.414270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.414379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.414404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.414549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.414574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.898 qpair failed and we were unable to recover it. 00:25:20.898 [2024-07-25 00:02:51.414727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.898 [2024-07-25 00:02:51.414752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.414901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.414925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.415076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.415117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.415277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.415303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.415412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.415436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.415604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.415632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 
00:25:20.899 [2024-07-25 00:02:51.415820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.415845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.416942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.416967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.417112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.417137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.417346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.417391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.417566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.417595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 
00:25:20.899 [2024-07-25 00:02:51.417786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.417816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.417943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.417973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.418115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.418142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.418292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.418320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.418495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.418526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.418691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.418716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.418858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.418902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.419050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.419078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.419316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.419342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.419481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.419506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 
00:25:20.899 [2024-07-25 00:02:51.419749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.419803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.419969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.419999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.420121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.420164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.899 [2024-07-25 00:02:51.420326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.899 [2024-07-25 00:02:51.420366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.899 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.420542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.420570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.420733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.420761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.420950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.421166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.421332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.421499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 
00:25:20.900 [2024-07-25 00:02:51.421708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.421926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.421955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.422125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.422152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.422334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.422360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.422527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.422555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.422869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.422923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.423056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.423083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.423257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.423300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.423457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.423485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.423674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.423699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 
00:25:20.900 [2024-07-25 00:02:51.423868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.423899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.424084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.424113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.424281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.424307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.424468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.424496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.424650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.424706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.424865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.424891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.425013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.425178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.425348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.425575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 
00:25:20.900 [2024-07-25 00:02:51.425739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.425895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.425923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.426102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.426129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.426285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.426313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.900 [2024-07-25 00:02:51.426429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.900 [2024-07-25 00:02:51.426456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.900 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.426601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.426629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.426772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.426798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.426964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.427163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.427346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 
00:25:20.901 [2024-07-25 00:02:51.427500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.427704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.427878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.427922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.428093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.428236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.428459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.428624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.428862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.428991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.429157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 
00:25:20.901 [2024-07-25 00:02:51.429371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.429569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.429738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.429925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.429951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.430091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.430117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.430251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.430277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.430470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.430514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.430674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.430718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.430877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.430922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.431062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.431088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 
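errno 111 is ECONNREFUSED on Linux: the host's reconnect loop keeps dialing the NVMe/TCP listener at 10.0.0.2:4420 while nothing is listening there, so every connect() is refused immediately. A minimal bash sketch of that failure mode follows; it uses only the address and port taken from the log above and is not the SPDK reconnect path itself:

    #!/usr/bin/env bash
    # Dial the NVMe/TCP listener address from the log. While no target is
    # listening on 10.0.0.2:4420, each connect() is refused, which is what
    # errno 111 (ECONNREFUSED) means in the messages above.
    for attempt in 1 2 3 4 5; do
        if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
            echo "attempt ${attempt}: connected"
            break
        fi
        echo "attempt ${attempt}: connect() failed (ECONNREFUSED while the target is down)"
        sleep 0.2
    done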
00:25:20.901 [2024-07-25 00:02:51.431227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.431261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3481667 Killed "${NVMF_APP[@]}" "$@" 00:25:20.901 [2024-07-25 00:02:51.431465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.431495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.431681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.431724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:20.901 [2024-07-25 00:02:51.431853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.431899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.432046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:20.901 [2024-07-25 00:02:51.432073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.432193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.432220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.432385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:20.901 [2024-07-25 00:02:51.432419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 
00:25:20.901 [2024-07-25 00:02:51.432530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:20.901 [2024-07-25 00:02:51.432557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.432711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.432737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.901 [2024-07-25 00:02:51.432850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.901 [2024-07-25 00:02:51.432876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.901 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.433921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.433948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 
00:25:20.902 [2024-07-25 00:02:51.434068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.434219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.434413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.434631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.434787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.434929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.434956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.435074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.435102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.435254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.435281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.435411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.435455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.435589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.435633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 
00:25:20.902 [2024-07-25 00:02:51.435780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.435806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.435975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.436118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.436319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=3482189 00:25:20.902 [2024-07-25 00:02:51.436464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 3482189 00:25:20.902 [2024-07-25 00:02:51.436607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 qpair failed and we were unable to recover it. 00:25:20.902 [2024-07-25 00:02:51.436753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:20.902 [2024-07-25 00:02:51.436781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 3482189 ']' 00:25:20.902 qpair failed and we were unable to recover it. 
00:25:20.902 [2024-07-25 00:02:51.436927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:20.902 [2024-07-25 00:02:51.436954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:20.902 [2024-07-25 00:02:51.437090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.437232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:20.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:20.902 [2024-07-25 00:02:51.437400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 00:02:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:20.902 [2024-07-25 00:02:51.437554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.437724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.437868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.437894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
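The traced locals inside waitforlisten (rpc_addr=/var/tmp/spdk.sock, max_retries=100, and the 'Waiting for process...' message) outline its job: poll until the freshly started nvmf_tgt is alive and its RPC socket is available, or give up after max_retries attempts. An illustrative poll loop in that spirit, a simplified stand-in rather than the actual autotest_common.sh implementation:

# Simplified stand-in for waitforlisten: returns 0 once the RPC socket exists,
# 1 if the process dies or the retry budget is exhausted.
wait_for_rpc_sock() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process exited
        [[ -S $rpc_addr ]] && return 0           # UNIX domain socket is up
        sleep 0.1
    done
    return 1
}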
00:25:20.902 [2024-07-25 00:02:51.438013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.438039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.438173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.438200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:20.902 qpair failed and we were unable to recover it.
00:25:20.902 [2024-07-25 00:02:51.438353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:20.902 [2024-07-25 00:02:51.438380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.438506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.438545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.438711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.438738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.438856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.438884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.439036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.439216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.439392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.439551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
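Note that from here the failures carry two different tqpair values, 0x7f8f0c000b90 and 0x2300250: two distinct qpair objects are retrying their connects concurrently, not one qpair looping. When triaging a log like this it helps to count failures per qpair; one way is an awk pass over a saved copy of the console output (build.log is a placeholder filename):

# Tally connection failures per tqpair pointer.
awk '/sock connection error of tqpair=/ {
    match($0, /tqpair=0x[0-9a-f]+/)
    count[substr($0, RSTART, RLENGTH)]++
} END { for (q in count) print q, count[q] }' build.log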
00:25:21.187 [2024-07-25 00:02:51.439698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.439849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.439875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.440818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.440846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.441944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.441972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.442961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.442988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.443124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.443158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.443288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.443318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.443440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.443467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.443632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.443676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.443846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.443894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.444053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.444096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.444218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.444254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.187 [2024-07-25 00:02:51.444404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.187 [2024-07-25 00:02:51.444431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.187 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.444554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.444582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.444772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.444801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.444958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.445158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.445361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.445509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.445767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.445967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.445997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.446175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.446349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.446514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.446672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.446840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.446990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.447147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.447310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.447468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.447668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.447875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.447919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.448122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.448270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.448456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.448633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.448834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.448964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.449006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.449151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.449178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.449344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.449373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.449548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.449591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.449769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.449803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.449968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.450150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.450358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.450545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.450740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.450918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.450960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.451106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.451133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.451250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.188 [2024-07-25 00:02:51.451277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.188 qpair failed and we were unable to recover it.
00:25:21.188 [2024-07-25 00:02:51.451417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.451462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.452232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.452285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.452448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.452494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.453151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.453180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.453351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.453397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.454380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.454410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.454614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.454663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.455341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.455371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.455518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.455563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.455712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.455738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.455865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.455892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.456041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.456067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.456211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.456237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.456443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.456489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.456645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.456689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.456854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.456903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.457963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.457993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.458931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.458959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.459749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.459782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.459974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.460163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.460316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.460492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.460672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.460846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.460888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.461053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.461095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.461252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.461277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.189 [2024-07-25 00:02:51.461422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.189 [2024-07-25 00:02:51.461449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.189 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.461604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.461633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.461789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.461818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.461979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.462834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.462982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.463144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.463341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.463485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.463623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.463793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.463823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.464885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.464913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.465095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.465294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.465476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.465683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.465870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.465994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.466144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.466331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.466508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.466678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.466933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.466963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.467130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.467158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.467312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.467338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.467454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.467479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.467615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.467659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.467821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.467849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.468065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.190 [2024-07-25 00:02:51.468094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.190 qpair failed and we were unable to recover it.
00:25:21.190 [2024-07-25 00:02:51.468250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.468279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.468415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.468440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.468559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.468585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.468730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.468772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.468929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.468957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.469179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.469207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.469365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.469390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.469535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.469563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.469708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.469732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.469886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.191 [2024-07-25 00:02:51.469916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.191 qpair failed and we were unable to recover it.
00:25:21.191 [2024-07-25 00:02:51.470099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.470271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.470414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.470592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.470785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.470969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.470998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.471127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.471156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.471323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.471363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.471502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.471529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.471675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.471720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 
00:25:21.191 [2024-07-25 00:02:51.471898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.471946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.472118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.472166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.472314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.472343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.472485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.472531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.472721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.472764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.472940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.472988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.473134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.473173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.473317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.473343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.473462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.473487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.473671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.473699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 
00:25:21.191 [2024-07-25 00:02:51.473857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.473885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.191 qpair failed and we were unable to recover it. 00:25:21.191 [2024-07-25 00:02:51.474009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.191 [2024-07-25 00:02:51.474037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.474201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.474226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.474353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.474379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.474501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.474526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.474709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.474737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.474889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.474917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.475057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.475101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.475294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.475336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.475472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.475497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 
00:25:21.192 [2024-07-25 00:02:51.475642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.475668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.475836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.475861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.476940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.476968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.477096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.477125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.477301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.477327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 
00:25:21.192 [2024-07-25 00:02:51.477447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.477473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.477638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.477663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.477797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.477822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.478068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.478097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.478272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.478315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.478551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.478578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.478766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.478793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.478973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.479000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.479227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.479262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.479427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.479452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 
00:25:21.192 [2024-07-25 00:02:51.479569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.479612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.479794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.479821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.479978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.480162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.480354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.480492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.480716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.480902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.480930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.481061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.481103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.192 qpair failed and we were unable to recover it. 00:25:21.192 [2024-07-25 00:02:51.481259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.192 [2024-07-25 00:02:51.481303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 
00:25:21.193 [2024-07-25 00:02:51.481522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.481565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.481723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.481748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.481920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.481945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.482941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.482966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.483160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.483188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 
00:25:21.193 [2024-07-25 00:02:51.483328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.483357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.483503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.483544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.483708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.483736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.483909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.483937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.484092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.484120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.484253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.484296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.484439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.484465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.484581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.484625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.484777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.484805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.485029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 
00:25:21.193 [2024-07-25 00:02:51.485207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.485409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.485543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.485715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.485912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.485940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.486063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.486091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.486212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.486240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.486385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.486411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.486549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.193 [2024-07-25 00:02:51.486575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.193 qpair failed and we were unable to recover it. 00:25:21.193 [2024-07-25 00:02:51.486601] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
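errno 111 on Linux is ECONNREFUSED: the target end of the TCP connection actively refused the attempt, which is what the initiator keeps hitting above while nothing is accepting on 10.0.0.2:4420. The following is not SPDK's posix_sock_create, just a minimal standalone sketch (address and port taken from the log) of how a plain POSIX connect() surfaces the same errno when no listener is present:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Same address/port the initiator retries in the log above. */
    struct sockaddr_in sa = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),
    };
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* With the host reachable but no NVMe/TCP listener bound to the
     * port, connect() fails and errno is ECONNREFUSED (111 on Linux). */
    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}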
00:25:21.193 [2024-07-25 00:02:51.486601] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization...
00:25:21.193 [2024-07-25 00:02:51.486680] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:21.196 [... the connect() failed (errno = 111) / qpair failed sequence continues for timestamps 00:02:51.486717 through 00:02:51.505649, cycling through tqpair=0x2300250, tqpair=0x7f8f0c000b90, and tqpair=0x7f8f04000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:25:21.196 [2024-07-25 00:02:51.505813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.505857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.505995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.506166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.506342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.506515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.506682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.506848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.506891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.507024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.507255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.507400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 
00:25:21.196 [2024-07-25 00:02:51.507563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.507752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.507928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.507972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.508100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.508128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.508300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.508327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.508494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.508520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.196 [2024-07-25 00:02:51.508699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.196 [2024-07-25 00:02:51.508727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.196 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.508948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.508977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.509133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.509163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.509314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.509340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 
00:25:21.197 [2024-07-25 00:02:51.509509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.509535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.509686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.509714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.509888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.509945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.510139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.510166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.510308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.510335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.510480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.510505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.510703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.510732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.510863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.510891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.511018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.511047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.511236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.511284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 
00:25:21.197 [2024-07-25 00:02:51.511439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.511467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.511614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.511658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.511815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.511844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.512054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.512223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.512425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.512611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.512805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.512984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.513185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 
00:25:21.197 [2024-07-25 00:02:51.513357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.513549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.513749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.513943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.513972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.514104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.514133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.514298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.514327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.514445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.514472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.514641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.514684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.514863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.514915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.515071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.515115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 
00:25:21.197 [2024-07-25 00:02:51.515305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.515334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.515452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.515504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.515715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.515745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.515926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.515954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.197 [2024-07-25 00:02:51.516078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.197 [2024-07-25 00:02:51.516142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.197 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.516332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.516384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.516537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.516566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.516722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.516752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.516889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.516917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.517045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 
00:25:21.198 [2024-07-25 00:02:51.517266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.517418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.517591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.517767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.517950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.517978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.518194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.518223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.518382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.518420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.518586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.518618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.518843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.518893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.519087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.519136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 
00:25:21.198 [2024-07-25 00:02:51.519283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.519309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.519456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.519482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.519623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.519649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.519813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.519838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.520887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.520912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 
00:25:21.198 [2024-07-25 00:02:51.521069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.521111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.521342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.521368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.521552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.521580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.521774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.521799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.521962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.521990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.522169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.198 [2024-07-25 00:02:51.522197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.522444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.522470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.522640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.522668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.522831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.522859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.523007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.523035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 
00:25:21.198 [2024-07-25 00:02:51.523193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.523221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.523380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.523406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.198 qpair failed and we were unable to recover it. 00:25:21.198 [2024-07-25 00:02:51.523518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.198 [2024-07-25 00:02:51.523544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.523683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.523726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.523875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.523903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.524053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.524081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.524240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.524294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.524435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.524460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.524642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.524668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.524829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.524857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 
00:25:21.199 [2024-07-25 00:02:51.525013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.525304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.525446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.525591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.525765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.525912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.525938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.526081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.526226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.526383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.526535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 
00:25:21.199 [2024-07-25 00:02:51.526702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.526868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.526895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.527928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.527953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.528086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.528328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 
00:25:21.199 [2024-07-25 00:02:51.528520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.528654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.528799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.528973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.528999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.199 [2024-07-25 00:02:51.529917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.529942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 
00:25:21.199 [2024-07-25 00:02:51.530105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.199 [2024-07-25 00:02:51.530131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.199 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.530245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.530271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.530387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.530412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.530559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.530584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.530696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.530722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.530886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.530911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.531032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.531216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.531398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.531560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 
00:25:21.200 [2024-07-25 00:02:51.531723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.531880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.531906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.532921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.532948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.533092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.533231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 
00:25:21.200 [2024-07-25 00:02:51.533397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.533566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.533729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.533919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.533944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.534137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.534316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.534470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.534653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.534821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.534987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.535014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 
00:25:21.200 [2024-07-25 00:02:51.535151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.535177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.535320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.535346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.535486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.535512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.200 qpair failed and we were unable to recover it. 00:25:21.200 [2024-07-25 00:02:51.535661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.200 [2024-07-25 00:02:51.535691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.535836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.535861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.535975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.536116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.536292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.536465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.536666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 
00:25:21.201 [2024-07-25 00:02:51.536796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.536962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.536987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.537948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.537974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.538099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.538255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 
00:25:21.201 [2024-07-25 00:02:51.538421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.538566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.538737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.538904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.538929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.539893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.539918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 
00:25:21.201 [2024-07-25 00:02:51.540090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.540117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.540260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.540287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.540526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.540564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.540687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.540714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.540852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.540878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.540998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.541189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.541338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.541485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.201 [2024-07-25 00:02:51.541625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 
00:25:21.201 [2024-07-25 00:02:51.541800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.201 [2024-07-25 00:02:51.541825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.201 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.541962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.541987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.542123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.542303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.542492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.542668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.542838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.542978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.543159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.543334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 
00:25:21.202 [2024-07-25 00:02:51.543479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.543667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.543835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.543861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.543987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.544939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.544965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 
00:25:21.202 [2024-07-25 00:02:51.545112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.545139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.545317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.545344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.545500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.545527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.545671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.545696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.545818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.545845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.545976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.546119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.546312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.546442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.546597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 
00:25:21.202 [2024-07-25 00:02:51.546729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.546903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.546929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.547850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.547877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.548016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.202 [2024-07-25 00:02:51.548041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.202 qpair failed and we were unable to recover it. 00:25:21.202 [2024-07-25 00:02:51.548200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.548240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 
00:25:21.203 [2024-07-25 00:02:51.548394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.548425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.548601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.548627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.548763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.548789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.548931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.548957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.549882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.549907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 
00:25:21.203 [2024-07-25 00:02:51.550080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.550958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.550983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.551092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.551232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.551408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 
00:25:21.203 [2024-07-25 00:02:51.551546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.551694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.551839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.551864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.552917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.552956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 
00:25:21.203 [2024-07-25 00:02:51.553118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.553303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.553500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.553644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.553788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.553922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.553948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.203 [2024-07-25 00:02:51.554111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.203 [2024-07-25 00:02:51.554151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.203 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.554300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.554328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.554499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.554525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.554651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.554678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 
00:25:21.204 [2024-07-25 00:02:51.554791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.554817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.554941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.554968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.555899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.555924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.556069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.556095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 00:25:21.204 [2024-07-25 00:02:51.556228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.204 [2024-07-25 00:02:51.556261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.204 qpair failed and we were unable to recover it. 
00:25:21.204 [2024-07-25 00:02:51.556412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.556438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.556555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.556580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.556656] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:21.204 [2024-07-25 00:02:51.556704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.556728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.556840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.556865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.557934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.557962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
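The app.c NOTICE interleaved above is spdk_app_start coming up with 4 cores, which suggests an SPDK application (plausibly the target side of this test) is still initializing while the initiator retries; until something binds and listens on 10.0.0.2:4420, every attempt keeps returning ECONNREFUSED. A hedged wait-for-listener sketch in plain POSIX C (not an SPDK API; the 100 ms interval and retry cap are arbitrary choices):

/* wait_for_listener.c - poll a TCP endpoint until it accepts connections.
 * A harness could use something like this to avoid a retry storm:
 * start the target, wait for 10.0.0.2:4420 to accept, then start the initiator.
 * Build: cc -o wait_for_listener wait_for_listener.c
 */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 once addr:port accepts a connection, -1 after max_tries. */
static int wait_for_listener(const char *addr, int port, int max_tries)
{
    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(port);
    if (inet_pton(AF_INET, addr, &sa.sin_addr) != 1)
        return -1;

    for (int i = 0; i < max_tries; i++) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;
        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
            close(fd);
            return 0;          /* listener is up */
        }
        int err = errno;       /* save errno: close() may clobber it */
        close(fd);
        if (err != ECONNREFUSED)
            return -1;         /* unexpected failure: give up */
        usleep(100 * 1000);    /* 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listener("10.0.0.2", 4420, 100) == 0)
        printf("10.0.0.2:4420 is accepting connections\n");
    else
        printf("gave up waiting for 10.0.0.2:4420\n");
    return 0;
}

The design choice is to treat only ECONNREFUSED as "keep waiting": any other errno (unreachable network, timeout) likely indicates a configuration problem that more retries will not fix.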
00:25:21.204 [2024-07-25 00:02:51.558151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.558178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.204 [2024-07-25 00:02:51.559014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.204 [2024-07-25 00:02:51.559041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.204 qpair failed and we were unable to recover it.
00:25:21.205 [2024-07-25 00:02:51.561140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.205 [2024-07-25 00:02:51.561180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420
00:25:21.205 qpair failed and we were unable to recover it.
00:25:21.210 [2024-07-25 00:02:51.592753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.210 [2024-07-25 00:02:51.592779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.210 qpair failed and we were unable to recover it.
00:25:21.210 [2024-07-25 00:02:51.592913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.592938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.593874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.593901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.594033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.594189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.594374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 
00:25:21.210 [2024-07-25 00:02:51.594519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.594692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.594859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.594885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.210 [2024-07-25 00:02:51.595901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.595926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 
00:25:21.210 [2024-07-25 00:02:51.596071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.210 [2024-07-25 00:02:51.596097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.210 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.596232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.596263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.596377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.596402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.596557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.596583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.596697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.596723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.596859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.596884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 
00:25:21.211 [2024-07-25 00:02:51.597616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.597933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.597959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.598899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.598924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.599061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 
00:25:21.211 [2024-07-25 00:02:51.599223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.599390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.599543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.599690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.599856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.599882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.600038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.600229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.600451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.600623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.600786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 
00:25:21.211 [2024-07-25 00:02:51.600960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.600986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.601898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.601923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.602056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.602085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.211 [2024-07-25 00:02:51.602205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.211 [2024-07-25 00:02:51.602231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.211 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.602359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.602385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 
00:25:21.212 [2024-07-25 00:02:51.602524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.602551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.602662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.602687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.602820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.602845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.602967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.602993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.603925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.603950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 
00:25:21.212 [2024-07-25 00:02:51.604095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.604288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.604461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.604609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.604790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.604926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.604953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.605103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.605248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.605407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.605573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 
00:25:21.212 [2024-07-25 00:02:51.605744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.605878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.605903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.606929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.606955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.607074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.607251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 
00:25:21.212 [2024-07-25 00:02:51.607396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.607569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.607738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.607935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.607961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.608101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.608126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.608269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.608295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.212 [2024-07-25 00:02:51.608442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.212 [2024-07-25 00:02:51.608468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.212 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.608611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.608637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.608794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.608821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.608937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.608964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 
00:25:21.213 [2024-07-25 00:02:51.609111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.609259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.609426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.609596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.609737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.609880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.609907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.610045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.610215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.610363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.610509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 
00:25:21.213 [2024-07-25 00:02:51.610674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.610818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.610844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.611820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.611847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.612015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.612180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 
00:25:21.213 [2024-07-25 00:02:51.612364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.612564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.612728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.612866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.612892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f04000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 00:25:21.213 [2024-07-25 00:02:51.613924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.213 [2024-07-25 00:02:51.613950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.213 qpair failed and we were unable to recover it. 
00:25:21.213 [2024-07-25 00:02:51.614072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.213 [2024-07-25 00:02:51.614099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.213 qpair failed and we were unable to recover it.
[... the same three-line error pattern repeats continuously from 00:02:51.614 through 00:02:51.648, varying only in timestamp and tqpair handle (0x7f8f0c000b90, 0x7f8f14000b90, or 0x2300250); every connect() attempt to 10.0.0.2:4420 fails with errno = 111 and no qpair recovers ...]
00:25:21.219 [2024-07-25 00:02:51.648960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.648987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.649160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.649185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.649310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.649336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.649461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.649500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.649668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.649696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.649864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.649891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 
00:25:21.219 [2024-07-25 00:02:51.650675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.650841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.219 qpair failed and we were unable to recover it. 00:25:21.219 [2024-07-25 00:02:51.650975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.219 [2024-07-25 00:02:51.651002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.651913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.651940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.652102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.652128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 
00:25:21.220 [2024-07-25 00:02:51.652265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.652320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.652462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.652488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.652650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.652676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.652810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.652835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.652978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.653165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.653348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.653503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.653700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.653837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.653863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 
00:25:21.220 [2024-07-25 00:02:51.653986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.654813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.654977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.655154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.655327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.655495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 
00:25:21.220 [2024-07-25 00:02:51.655638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.655801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.655965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.655990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.656939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.656965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.220 [2024-07-25 00:02:51.657104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.657130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 
00:25:21.220 [2024-07-25 00:02:51.657248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.220 [2024-07-25 00:02:51.657275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.220 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.657390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.657416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.657526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.657555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.657671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.657700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.657838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.657864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 
00:25:21.221 [2024-07-25 00:02:51.658776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.658919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.658944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.659894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.659920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.660058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.660219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 
00:25:21.221 [2024-07-25 00:02:51.660395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.660534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.660713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.660884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.660910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.661843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.661869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 
00:25:21.221 [2024-07-25 00:02:51.662032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.662851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.662994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.663019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.663153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.663179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.663293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.221 [2024-07-25 00:02:51.663320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.221 qpair failed and we were unable to recover it. 00:25:21.221 [2024-07-25 00:02:51.663468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.663493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 
00:25:21.222 [2024-07-25 00:02:51.663632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.663658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.663775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.663802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.663965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.663991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.664896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.664922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.665044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 
00:25:21.222 [2024-07-25 00:02:51.665235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.665383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.665547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.665716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.665881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.665906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.666019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.666193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.666383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.666551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.666722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 
00:25:21.222 [2024-07-25 00:02:51.666885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.666911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.667922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.667947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.668118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.668261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.668407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 
00:25:21.222 [2024-07-25 00:02:51.668576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.668712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.668886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.668913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.669051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.669076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.669218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.669249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.669414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.669439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.669591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.669616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.222 qpair failed and we were unable to recover it. 00:25:21.222 [2024-07-25 00:02:51.669734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.222 [2024-07-25 00:02:51.669760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.669878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.669905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 
00:25:21.223 [2024-07-25 00:02:51.670189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.670881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.670906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.671021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.671047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.671156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.671182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.671315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.671354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 00:25:21.223 [2024-07-25 00:02:51.671476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.223 [2024-07-25 00:02:51.671503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.223 qpair failed and we were unable to recover it. 
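For context while reading the retries above: errno = 111 is ECONNREFUSED on Linux, i.e. the host at 10.0.0.2 was reachable but nothing was accepting connections on port 4420 (the IANA-assigned NVMe/TCP port) at that moment, so each connect() issued by posix_sock_create was refused and the initiator kept retrying. A minimal standalone sketch, not SPDK code, that reproduces the same errno when no listener is present; the address and port are copied from the log, everything else is illustrative:

/* connect_refused.c - demonstrates connect() failing with ECONNREFUSED (111) */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* 10.0.0.2:4420 is the target address/port from the log above. */
    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* If the host is up but nothing listens on the port, the kernel
     * answers with a TCP RST and connect() sets errno to ECONNREFUSED,
     * which is 111 on Linux - the value reported throughout this log. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}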
00:25:21.223 [2024-07-25 00:02:51.671618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:21.223 [2024-07-25 00:02:51.671652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:21.223 [2024-07-25 00:02:51.671666] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:21.223 [2024-07-25 00:02:51.671679] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:21.223 [2024-07-25 00:02:51.671689] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:21.223 [2024-07-25 00:02:51.671905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:25:21.223 [2024-07-25 00:02:51.671956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:25:21.223 [2024-07-25 00:02:51.671978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:25:21.223 [2024-07-25 00:02:51.671981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:25:21.223 [2024-07-25 00:02:51.671617 - 00:02:51.672670] (connect() retries to addr=10.0.0.2, port=4420 continue in parallel with the notices above and are interleaved with them in the raw log: the same errno = 111 sequence on tqpair=0x7f8f14000b90 and 0x7f8f0c000b90)
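The app.c and reactor.c NOTICE lines above appear to be the nvmf target application starting up in the middle of the initiator's retry loop: a 0xFFFF tracepoint group mask is enabled, the application prints how to inspect its trace buffer (running 'spdk_trace -s nvmf -i 0' at runtime, or copying /dev/shm/nvmf_trace.0 for offline analysis, as the notices themselves state), and reactors come online on cores 4-7. Until the target finishes bringing up its listener on 10.0.0.2:4420, the initiator's connect() attempts keep failing with ECONNREFUSED.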
00:25:21.223 [2024-07-25 00:02:51.672844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.223 [2024-07-25 00:02:51.672871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f0c000b90 with addr=10.0.0.2, port=4420
00:25:21.223 qpair failed and we were unable to recover it.
[the three lines above repeat ~190 more times between 00:02:51.672997 and 00:02:51.703147, cycling through tqpair values 0x7f8f0c000b90, 0x7f8f14000b90, and 0x2300250, always against addr=10.0.0.2, port=4420 and always ending in "qpair failed and we were unable to recover it."; only the timestamps and the tqpair pointer vary]
00:25:21.228 [2024-07-25 00:02:51.703263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.228 [2024-07-25 00:02:51.703290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.228 qpair failed and we were unable to recover it. 00:25:21.228 [2024-07-25 00:02:51.703407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.228 [2024-07-25 00:02:51.703432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.228 qpair failed and we were unable to recover it. 00:25:21.228 [2024-07-25 00:02:51.703553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.228 [2024-07-25 00:02:51.703578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.228 qpair failed and we were unable to recover it. 00:25:21.228 [2024-07-25 00:02:51.703687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.228 [2024-07-25 00:02:51.703713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.228 qpair failed and we were unable to recover it. 00:25:21.228 [2024-07-25 00:02:51.703841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.228 [2024-07-25 00:02:51.703866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.228 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.704023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.704189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.704360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.704542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.704713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 
00:25:21.229 [2024-07-25 00:02:51.704883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.704913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.705838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.705865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.706011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.706147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.706289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 
00:25:21.229 [2024-07-25 00:02:51.706489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.706656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.706808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.706833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.707844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.707869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 
00:25:21.229 [2024-07-25 00:02:51.708145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.708881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.708906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 
00:25:21.229 [2024-07-25 00:02:51.709682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.709871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.709992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.229 [2024-07-25 00:02:51.710018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.229 qpair failed and we were unable to recover it. 00:25:21.229 [2024-07-25 00:02:51.710129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.710866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.710975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 
00:25:21.230 [2024-07-25 00:02:51.711143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.711341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.711483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.711619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.711764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.711904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.711930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 
00:25:21.230 [2024-07-25 00:02:51.712656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.712844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.712985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.713926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.713952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.714088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 
00:25:21.230 [2024-07-25 00:02:51.714219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.714399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.714570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.714714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.714881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.714907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 
00:25:21.230 [2024-07-25 00:02:51.715820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.715956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.715983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.716104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.716130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.230 [2024-07-25 00:02:51.716269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.230 [2024-07-25 00:02:51.716295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.230 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.716410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.716435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.716555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.716580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.716691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.716716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.716829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.716855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.716962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.716987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.717104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 
00:25:21.231 [2024-07-25 00:02:51.717254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.717414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.717597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.717732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.717880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.717907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.718017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.718158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.718332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.718527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.718731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 
00:25:21.231 [2024-07-25 00:02:51.718880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.718906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.719911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.719936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.720107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.720247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 
00:25:21.231 [2024-07-25 00:02:51.720393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.720527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.720666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.720843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.720868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.721774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 
00:25:21.231 [2024-07-25 00:02:51.721938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.721963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.722075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.722100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.722211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.722236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.722363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.722389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.722497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.231 [2024-07-25 00:02:51.722522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.231 qpair failed and we were unable to recover it. 00:25:21.231 [2024-07-25 00:02:51.722638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.722665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.722784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.722809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.722922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.722947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.723092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.723233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 
00:25:21.232 [2024-07-25 00:02:51.723403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.723540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.723698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.723840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.723866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.724818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.724843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 
00:25:21.232 [2024-07-25 00:02:51.724987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.725907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.725932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.726042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.726202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.726369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 
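The repeated failure above is the host side of an NVMe/TCP connect attempt: on Linux, errno = 111 is ECONNREFUSED, meaning nothing was accepting connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port) at the moment posix_sock_create() issued connect(). Below is a minimal standalone sketch of the same probe; the endpoint mirrors the log, but the program itself is illustrative only and is not part of the test.

/*
 * Minimal sketch (not part of the test) showing what errno = 111 means here:
 * on Linux, 111 is ECONNREFUSED, i.e. no listener was accepting on the
 * target address/port when connect() was called. The address and port
 * below are taken from the log (10.0.0.2:4420, the NVMe/TCP port);
 * adjust them for your own environment.
 */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),    /* NVMe/TCP target port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener on 10.0.0.2:4420 this prints errno 111 (ECONNREFUSED). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    } else {
        printf("connected\n");
    }

    close(fd);
    return 0;
}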
00:25:21.232 [2024-07-25 00:02:51.726553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.726731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.726882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.726907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.727881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.727998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.728028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 
00:25:21.232 [2024-07-25 00:02:51.728147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.728173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.728296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.728323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.728440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.728465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.232 [2024-07-25 00:02:51.728584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.232 [2024-07-25 00:02:51.728609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.232 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.728752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.728779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.728897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.728922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 
00:25:21.233 [2024-07-25 00:02:51.729659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.729962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.729987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.730918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.730944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 
00:25:21.233 [2024-07-25 00:02:51.731192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.731965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.731991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.732104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.732130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.732251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.732277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.732393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.233 [2024-07-25 00:02:51.732418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.233 qpair failed and we were unable to recover it. 00:25:21.233 [2024-07-25 00:02:51.732567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.732592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 
00:25:21.234 [2024-07-25 00:02:51.732701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.732726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.732836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.732862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.732975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.733883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.733908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 
00:25:21.234 [2024-07-25 00:02:51.734187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.734943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.734968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 
00:25:21.234 [2024-07-25 00:02:51.735688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.735845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.735987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.736963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.736989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.737111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 
00:25:21.234 [2024-07-25 00:02:51.737258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.737409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.737556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.737755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.737918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.737944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.738085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.738111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.234 qpair failed and we were unable to recover it. 00:25:21.234 [2024-07-25 00:02:51.738260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.234 [2024-07-25 00:02:51.738286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.738409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.738434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.738580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.738607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.738749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.738776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 
00:25:21.235 [2024-07-25 00:02:51.738916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.738947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.739970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.739995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.740141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.740277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 
00:25:21.235 [2024-07-25 00:02:51.740429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.740581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.740720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.740852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.740877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.741891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.741918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 
00:25:21.235 [2024-07-25 00:02:51.742082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.742252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.742421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.742594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.742735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.742897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.742922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.743065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.743201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.743373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.743520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 
00:25:21.235 [2024-07-25 00:02:51.743697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.743867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.743893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.744035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.744060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.744173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.235 [2024-07-25 00:02:51.744198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.235 qpair failed and we were unable to recover it. 00:25:21.235 [2024-07-25 00:02:51.744355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.744381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.744491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.744516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.744662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.744688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.744833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.744858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.744966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.744991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.745141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 
00:25:21.236 [2024-07-25 00:02:51.745286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.745420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.745593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.745760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.745897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.745923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 
00:25:21.236 [2024-07-25 00:02:51.746815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.746840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.746984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.747931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.747956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 
00:25:21.236 [2024-07-25 00:02:51.748364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.748873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.748983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.749178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.749331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.749479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.749647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 00:25:21.236 [2024-07-25 00:02:51.749821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.236 [2024-07-25 00:02:51.749847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 00:25:21.236 qpair failed and we were unable to recover it. 
00:25:21.236 [2024-07-25 00:02:51.749960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.236 [2024-07-25 00:02:51.749986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420
00:25:21.236 qpair failed and we were unable to recover it.
00:25:21.236 [... the same three-line failure (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f8f14000b90 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats back-to-back from 00:02:51.750105 through 00:02:51.775221 ...]
00:25:21.504 [2024-07-25 00:02:51.775355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:21.504 [2024-07-25 00:02:51.775396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420
00:25:21.504 qpair failed and we were unable to recover it.
00:25:21.505 [... the identical failure triplet continues from 00:02:51.775537 through 00:02:51.781984, alternating between tqpair=0x2300250 and tqpair=0x7f8f14000b90; every connection attempt to addr=10.0.0.2, port=4420 ends with "qpair failed and we were unable to recover it." ...]
00:25:21.505 [2024-07-25 00:02:51.782155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.782298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.782491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.782661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.782826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.782956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.782981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.783126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.783291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.783427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.783595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 
00:25:21.505 [2024-07-25 00:02:51.783795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.783927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.505 [2024-07-25 00:02:51.783953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.505 qpair failed and we were unable to recover it. 00:25:21.505 [2024-07-25 00:02:51.784077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.784222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.784378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.784551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.784723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.784915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.784940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.785087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.785225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 
00:25:21.506 [2024-07-25 00:02:51.785364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.785503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.785647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.785824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.785849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.786747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 
00:25:21.506 [2024-07-25 00:02:51.786895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.786921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.787915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.787940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.788075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.788213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 
00:25:21.506 [2024-07-25 00:02:51.788359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.788530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.788680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.788827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.788852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 00:25:21.506 [2024-07-25 00:02:51.789731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.506 qpair failed and we were unable to recover it. 
00:25:21.506 [2024-07-25 00:02:51.789903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.506 [2024-07-25 00:02:51.789929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.790865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.790983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.791131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.791275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 
00:25:21.507 [2024-07-25 00:02:51.791431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.791567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.791702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.791897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.791923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.792767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 
00:25:21.507 [2024-07-25 00:02:51.792936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.792961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.793949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.793974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 
00:25:21.507 [2024-07-25 00:02:51.794400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.794839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.794981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.795006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.795147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.795172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.795280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.795306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.795457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.795483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.507 [2024-07-25 00:02:51.795595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.507 [2024-07-25 00:02:51.795620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.507 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.795765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.795790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 
00:25:21.508 [2024-07-25 00:02:51.795903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.795928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.796849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.796986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.797148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.797285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 
00:25:21.508 [2024-07-25 00:02:51.797457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.797625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.797771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.797922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.797947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.798885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.798910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 
00:25:21.508 [2024-07-25 00:02:51.799024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.799939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.799964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.800104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.800130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.800237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.800266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.508 qpair failed and we were unable to recover it. 00:25:21.508 [2024-07-25 00:02:51.800382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.508 [2024-07-25 00:02:51.800407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 
00:25:21.509 [2024-07-25 00:02:51.800559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.800585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.800700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.800725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.800843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.800868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 00:25:21.509 [2024-07-25 00:02:51.801885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.801911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it. 
00:25:21.509 [2024-07-25 00:02:51.802055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.509 [2024-07-25 00:02:51.802080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.509 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / "qpair failed and we were unable to recover it" pair repeats roughly a hundred more times between 00:02:51.802 and 00:02:51.822, alternating between tqpair=0x2300250 and tqpair=0x7f8f0c000b90, as the host keeps retrying 10.0.0.2 port 4420 ...]
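The errno = 111 in this loop is ECONNREFUSED: while the target side of the disconnect test is down, nothing accepts TCP connections on 10.0.0.2:4420, so every reconnect attempt from the host fails immediately. A minimal way to confirm the listener state by hand, assuming iproute2 (ss) and nvme-cli are available on the node (these commands are not part of the test script itself):

  # Is anything listening on the NVMe/TCP port on the target node?
  ss -ltn | grep 4420
  # Once the target is back up, discovery against the same address should succeed:
  nvme discover -t tcp -a 10.0.0.2 -s 4420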
00:25:21.512 [2024-07-25 00:02:51.822042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.512 [2024-07-25 00:02:51.822067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2300250 with addr=10.0.0.2, port=4420 00:25:21.512 qpair failed and we were unable to recover it. 00:25:21.512 A controller has encountered a failure and is being reset. 00:25:21.512 [2024-07-25 00:02:51.822270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:21.512 [2024-07-25 00:02:51.822318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x230e230 with addr=10.0.0.2, port=4420 00:25:21.512 [2024-07-25 00:02:51.822339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230e230 is same with the state(5) to be set 00:25:21.512 [2024-07-25 00:02:51.822367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x230e230 (9): Bad file descriptor 00:25:21.512 [2024-07-25 00:02:51.822386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:21.512 [2024-07-25 00:02:51.822402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:21.512 [2024-07-25 00:02:51.822419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:21.512 Unable to reset the controller. 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 Malloc0 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 [2024-07-25 00:02:52.466825] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 [2024-07-25 00:02:52.495107] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:22.078 00:02:52 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3481702 00:25:22.643 Controller properly reset. 
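The xtrace above is the target bring-up for this test case: a 64 MB malloc bdev, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and data plus discovery listeners on 10.0.0.2:4420. A condensed sketch of the same sequence issued directly through scripts/rpc.py (these are the RPCs behind the rpc_cmd wrapper; the -o transport flag from the trace is omitted here, and nvmf_tgt is assumed to already be running on the default /var/tmp/spdk.sock):

  # run from the root of an SPDK checkout
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420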
00:25:27.920 Initializing NVMe Controllers 00:25:27.920 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.920 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:27.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:25:27.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:25:27.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:25:27.920 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:25:27.920 Initialization complete. Launching workers. 00:25:27.920 Starting thread on core 1 00:25:27.920 Starting thread on core 2 00:25:27.920 Starting thread on core 3 00:25:27.920 Starting thread on core 0 00:25:27.920 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:25:27.920 00:25:27.920 real 0m10.813s 00:25:27.920 user 0m33.022s 00:25:27.920 sys 0m7.716s 00:25:27.920 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:27.920 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:27.920 ************************************ 00:25:27.920 END TEST nvmf_target_disconnect_tc2 00:25:27.920 ************************************ 00:25:27.920 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:27.921 rmmod nvme_tcp 00:25:27.921 rmmod nvme_fabrics 00:25:27.921 rmmod nvme_keyring 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 3482189 ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 3482189 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 3482189 ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 3482189 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' 
Linux = Linux ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3482189 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3482189' 00:25:27.921 killing process with pid 3482189 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 3482189 00:25:27.921 00:02:57 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 3482189 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:27.921 00:02:58 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:29.821 00:25:29.821 real 0m15.546s 00:25:29.821 user 0m58.503s 00:25:29.821 sys 0m10.184s 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:29.821 ************************************ 00:25:29.821 END TEST nvmf_target_disconnect 00:25:29.821 ************************************ 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:25:29.821 00:25:29.821 real 5m3.335s 00:25:29.821 user 11m2.845s 00:25:29.821 sys 1m13.073s 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.821 00:03:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.821 ************************************ 00:25:29.821 END TEST nvmf_host 00:25:29.821 ************************************ 00:25:29.821 00:25:29.821 real 19m32.232s 00:25:29.821 user 46m31.475s 00:25:29.821 sys 4m49.587s 00:25:29.822 00:03:00 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:29.822 00:03:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 ************************************ 00:25:29.822 END TEST nvmf_tcp 00:25:29.822 ************************************ 00:25:29.822 00:03:00 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:25:29.822 00:03:00 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:29.822 00:03:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 
1 ']' 00:25:29.822 00:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:29.822 00:03:00 -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 ************************************ 00:25:29.822 START TEST spdkcli_nvmf_tcp 00:25:29.822 ************************************ 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:25:29.822 * Looking for test storage... 00:25:29.822 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3483342 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # 
waitforlisten 3483342 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 3483342 ']' 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:29.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:29.822 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:29.822 [2024-07-25 00:03:00.373767] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:25:29.822 [2024-07-25 00:03:00.373851] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3483342 ] 00:25:29.822 EAL: No free 2048 kB hugepages reported on node 1 00:25:30.080 [2024-07-25 00:03:00.434425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:30.080 [2024-07-25 00:03:00.554266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:30.080 [2024-07-25 00:03:00.554269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.080 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:30.080 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:25:30.080 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:25:30.080 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:30.081 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:30.338 00:03:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:30.338 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:30.338 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:30.338 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:30.338 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:30.338 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:30.338 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:30.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' 
'\''Malloc4'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.338 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:30.338 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:30.339 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:30.339 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:30.339 ' 00:25:32.866 [2024-07-25 00:03:03.232130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:34.237 [2024-07-25 00:03:04.452475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:36.135 [2024-07-25 00:03:06.711703] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:38.660 [2024-07-25 00:03:08.653896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:39.593 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:39.593 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 
00:25:39.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:39.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:39.593 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:39.593 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:39.593 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
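Each argument handed to spdkcli_job.py above is a quoted triple: the spdkcli command to run, a substring expected in its output, and a flag saying whether to verify the match; the job echoes each one back as "Executing command: [cmd, match, check]". The same configuration can also be driven one step at a time with spdkcli.py itself. A minimal sketch, assuming an SPDK checkout as the working directory and a target already listening on /var/tmp/spdk.sock, with names, serial numbers, and ports taken from the run above:

    # create a 32 MiB malloc bdev with 512-byte blocks
    ./scripts/spdkcli.py /bdevs/malloc create 32 512 Malloc1
    # bring up the TCP transport with the same limits the job used
    ./scripts/spdkcli.py nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    # create a subsystem, attach the bdev as nsid 1, and listen on 127.0.0.1:4260
    ./scripts/spdkcli.py /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc1 1
    ./scripts/spdkcli.py /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    # dump the tree; check_match below diffs this output against spdkcli_nvmf.test.match
    ./scripts/spdkcli.py ll /nvmf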
00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:39.851 00:03:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:40.107 00:03:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:40.107 00:03:10 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:40.107 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:40.107 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:40.107 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.364 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:40.364 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:40.364 00:03:10 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:40.364 00:03:10 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:40.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:40.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:40.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:40.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:40.364 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:40.364 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:40.364 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:40.364 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:40.364 ' 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:45.623 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:45.623 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 
'nqn.2014-08.org.spdk:cnode3', False] 00:25:45.623 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:45.623 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3483342 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3483342 ']' 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3483342 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3483342 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3483342' 00:25:45.623 killing process with pid 3483342 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 3483342 00:25:45.623 00:03:15 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 3483342 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3483342 ']' 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3483342 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 3483342 ']' 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 3483342 00:25:45.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3483342) - No such process 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 3483342 is not found' 00:25:45.902 Process with pid 3483342 is not found 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:45.902 00:25:45.902 real 0m15.981s 00:25:45.902 user 0m33.670s 00:25:45.902 sys 0m0.788s 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:45.902 00:03:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 
-- # set +x 00:25:45.902 ************************************ 00:25:45.902 END TEST spdkcli_nvmf_tcp 00:25:45.902 ************************************ 00:25:45.902 00:03:16 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:45.902 00:03:16 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:45.902 00:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:45.902 00:03:16 -- common/autotest_common.sh@10 -- # set +x 00:25:45.902 ************************************ 00:25:45.902 START TEST nvmf_identify_passthru 00:25:45.902 ************************************ 00:25:45.902 00:03:16 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:45.902 * Looking for test storage... 00:25:45.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:45.902 00:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:45.902 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.902 00:03:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.902 00:03:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.902 00:03:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.902 00:03:16 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:45.903 00:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:45.903 00:03:16 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:45.903 00:03:16 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:45.903 00:03:16 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:45.903 00:03:16 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:45.903 00:03:16 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.903 00:03:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:45.903 00:03:16 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:45.903 00:03:16 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:45.903 00:03:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:47.814 00:03:18 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:47.814 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:47.814 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.814 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:47.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:47.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
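nvmf_tcp_init, traced next, splits the two detected E810 ports into a two-sided topology: cvl_0_0 is moved into a private network namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), so NVMe/TCP traffic actually crosses the physical link. Condensed from the commands in the trace below, the equivalent manual setup is:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                            # reachability check, as the trace does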
00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:47.815 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:48.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:48.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:25:48.073 00:25:48.073 --- 10.0.0.2 ping statistics --- 00:25:48.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.073 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:48.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:48.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:25:48.073 00:25:48.073 --- 10.0.0.1 ping statistics --- 00:25:48.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:48.073 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:48.073 00:03:18 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=() 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # local bdfs 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # bdfs=($(get_nvme_bdfs)) 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1523 -- # get_nvme_bdfs 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=() 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # local bdfs 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # jq -r '.config[].params.traddr' 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # (( 1 == 0 )) 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1517 -- # printf '%s\n' 0000:88:00.0 00:25:48.073 00:03:18 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # echo 0000:88:00.0 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:48.073 00:03:18 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:48.073 EAL: No free 2048 kB hugepages reported on node 1 00:25:52.256 
00:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:52.256 00:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:52.256 00:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:52.256 00:03:22 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:52.514 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3488552 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:56.731 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3488552 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 3488552 ']' 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:56.731 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:56.731 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.732 [2024-07-25 00:03:27.167927] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:25:56.732 [2024-07-25 00:03:27.168024] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:56.732 EAL: No free 2048 kB hugepages reported on node 1 00:25:56.732 [2024-07-25 00:03:27.246097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:56.990 [2024-07-25 00:03:27.379923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:56.990 [2024-07-25 00:03:27.379983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:56.990 [2024-07-25 00:03:27.380022] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:56.990 [2024-07-25 00:03:27.380042] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:56.990 [2024-07-25 00:03:27.380060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:56.990 [2024-07-25 00:03:27.380153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.990 [2024-07-25 00:03:27.380313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:56.990 [2024-07-25 00:03:27.380463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:56.990 [2024-07-25 00:03:27.380474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:25:56.990 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.990 INFO: Log level set to 20 00:25:56.990 INFO: Requests: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "method": "nvmf_set_config", 00:25:56.990 "id": 1, 00:25:56.990 "params": { 00:25:56.990 "admin_cmd_passthru": { 00:25:56.990 "identify_ctrlr": true 00:25:56.990 } 00:25:56.990 } 00:25:56.990 } 00:25:56.990 00:25:56.990 INFO: response: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "id": 1, 00:25:56.990 "result": true 00:25:56.990 } 00:25:56.990 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.990 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.990 INFO: Setting log level to 20 00:25:56.990 INFO: Setting log level to 20 00:25:56.990 INFO: Log level set to 20 00:25:56.990 INFO: Log level set to 20 00:25:56.990 INFO: Requests: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "method": "framework_start_init", 00:25:56.990 "id": 1 00:25:56.990 } 00:25:56.990 00:25:56.990 INFO: Requests: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "method": "framework_start_init", 00:25:56.990 "id": 1 00:25:56.990 } 00:25:56.990 00:25:56.990 [2024-07-25 00:03:27.551489] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:56.990 INFO: response: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "id": 1, 00:25:56.990 "result": true 00:25:56.990 } 00:25:56.990 00:25:56.990 INFO: response: 00:25:56.990 { 00:25:56.990 "jsonrpc": "2.0", 00:25:56.990 "id": 1, 00:25:56.990 "result": true 00:25:56.990 } 00:25:56.990 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.990 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.990 00:03:27 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:25:56.990 INFO: Setting log level to 40 00:25:56.990 INFO: Setting log level to 40 00:25:56.990 INFO: Setting log level to 40 00:25:56.990 [2024-07-25 00:03:27.561518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.990 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:56.990 00:03:27 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.990 00:03:27 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 Nvme0n1 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 [2024-07-25 00:03:30.449774] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 [ 00:26:00.267 { 00:26:00.267 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:00.267 "subtype": "Discovery", 00:26:00.267 "listen_addresses": [], 00:26:00.267 "allow_any_host": true, 00:26:00.267 "hosts": [] 00:26:00.267 }, 00:26:00.267 { 00:26:00.267 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.267 "subtype": "NVMe", 00:26:00.267 "listen_addresses": [ 00:26:00.267 { 00:26:00.267 "trtype": "TCP", 00:26:00.267 "adrfam": "IPv4", 00:26:00.267 "traddr": "10.0.0.2", 00:26:00.267 "trsvcid": "4420" 00:26:00.267 } 00:26:00.267 ], 00:26:00.267 "allow_any_host": true, 00:26:00.267 "hosts": [], 00:26:00.267 "serial_number": 
"SPDK00000000000001", 00:26:00.267 "model_number": "SPDK bdev Controller", 00:26:00.267 "max_namespaces": 1, 00:26:00.267 "min_cntlid": 1, 00:26:00.267 "max_cntlid": 65519, 00:26:00.267 "namespaces": [ 00:26:00.267 { 00:26:00.267 "nsid": 1, 00:26:00.267 "bdev_name": "Nvme0n1", 00:26:00.267 "name": "Nvme0n1", 00:26:00.267 "nguid": "DE995061F4754DD59866D1D3EE1474B1", 00:26:00.267 "uuid": "de995061-f475-4dd5-9866-d1d3ee1474b1" 00:26:00.267 } 00:26:00.267 ] 00:26:00.267 } 00:26:00.267 ] 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:00.267 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:00.267 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:00.267 00:03:30 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:00.267 rmmod nvme_tcp 00:26:00.267 rmmod nvme_fabrics 00:26:00.267 rmmod nvme_keyring 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:26:00.267 00:03:30 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 3488552 ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 3488552 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 3488552 ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 3488552 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3488552 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3488552' 00:26:00.267 killing process with pid 3488552 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 3488552 00:26:00.267 00:03:30 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 3488552 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:02.165 00:03:32 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.165 00:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:02.165 00:03:32 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.066 00:03:34 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:04.066 00:26:04.066 real 0m18.193s 00:26:04.066 user 0m26.779s 00:26:04.066 sys 0m2.328s 00:26:04.066 00:03:34 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:04.066 00:03:34 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:26:04.066 ************************************ 00:26:04.066 END TEST nvmf_identify_passthru 00:26:04.066 ************************************ 00:26:04.066 00:03:34 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:04.066 00:03:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:04.066 00:03:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:04.066 00:03:34 -- common/autotest_common.sh@10 -- # set +x 00:26:04.066 ************************************ 00:26:04.066 START TEST nvmf_dif 00:26:04.066 ************************************ 00:26:04.066 00:03:34 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:04.066 * Looking for test storage... 
00:26:04.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:04.066 00:03:34 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:04.066 00:03:34 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:04.066 00:03:34 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:04.066 00:03:34 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.066 00:03:34 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.066 00:03:34 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.066 00:03:34 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:26:04.066 00:03:34 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:26:04.066 00:03:34 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.066 00:03:34 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:04.066 00:03:34 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:04.066 00:03:34 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:26:04.066 00:03:34 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:05.964 00:03:36 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:05.965 00:03:36 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:06.223 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:06.223 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
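gather_supported_nvmf_pci_devs, traced here, builds allowlists of PCI vendor:device IDs (the e810, x722, and mlx families above) and then resolves each matching function to its kernel net device through sysfs; its output continues just below with the net devices found under each E810 port. A condensed sketch of the same lookup, assuming lspci is available and listing only the E810 IDs seen in this run:

#!/usr/bin/env bash
# Sketch: map supported NIC PCI IDs to kernel net devices via sysfs,
# as the trace above does. Only the E810 IDs from this host are listed.
set -euo pipefail
for id in 8086:1592 8086:159b; do                      # E810 variants
    # lspci -Dnd <vendor:device> prints the full PCI address of each match.
    for pci in $(lspci -Dnd "$id" | awk '{print $1}'); do
        echo "Found $pci ($id)"
        for net in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$net" ] || continue
            echo "Found net device under $pci: ${net##*/}"
        done
    done
done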
00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:06.223 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:06.223 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:06.223 00:03:36 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:06.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:26:06.223 00:26:06.223 --- 10.0.0.2 ping statistics --- 00:26:06.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.223 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:06.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:06.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:26:06.223 00:26:06.223 --- 10.0.0.1 ping statistics --- 00:26:06.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.223 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:06.223 00:03:36 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:07.595 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:07.595 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:07.595 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:07.595 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:07.595 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:07.595 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:07.595 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:07.595 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:07.595 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:07.595 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:26:07.595 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:26:07.595 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:26:07.595 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:26:07.596 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:26:07.596 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:26:07.596 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:26:07.596 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:07.596 00:03:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:07.596 00:03:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=3491696 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:07.596 00:03:37 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 3491696 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 3491696 ']' 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:07.596 00:03:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:07.596 [2024-07-25 00:03:38.041837] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:26:07.596 [2024-07-25 00:03:38.041914] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.596 EAL: No free 2048 kB hugepages reported on node 1 00:26:07.596 [2024-07-25 00:03:38.111397] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.853 [2024-07-25 00:03:38.227844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.853 [2024-07-25 00:03:38.227898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.853 [2024-07-25 00:03:38.227915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.853 [2024-07-25 00:03:38.227928] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.853 [2024-07-25 00:03:38.227939] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
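The block above is nvmftestinit doing its network split, followed by nvmfappstart: port cvl_0_0 is moved into a private namespace with 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, an iptables rule opens TCP/4420, reachability is ping-verified in both directions, and nvmf_tgt (pid 3491696 here) is launched inside the namespace. The same plumbing, condensed into a runnable sketch with this run's interface names and addresses:

#!/usr/bin/env bash
# Sketch: isolate the target-side NIC port in its own netns so one host
# can act as both NVMe/TCP target and initiator over a real link.
set -euo pipefail
NS=cvl_0_0_ns_spdk TGT_IF=cvl_0_0 INI_IF=cvl_0_1
ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                         # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1     # target -> initiator
# Launch the target inside the namespace; the harness then waits for the
# RPC socket (/var/tmp/spdk.sock) before issuing any rpc_cmd calls.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &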
00:26:07.853 [2024-07-25 00:03:38.227970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:26:08.418 00:03:38 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.418 00:03:38 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.418 00:03:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:26:08.418 00:03:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.418 00:03:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.418 [2024-07-25 00:03:38.999307] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.418 00:03:39 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.418 00:03:39 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:26:08.418 00:03:39 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:08.418 00:03:39 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:08.418 00:03:39 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:08.418 ************************************ 00:26:08.418 START TEST fio_dif_1_default 00:26:08.418 ************************************ 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.418 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.676 bdev_null0 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:08.676 [2024-07-25 00:03:39.055559] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:08.676 { 00:26:08.676 "params": { 00:26:08.676 "name": "Nvme$subsystem", 00:26:08.676 "trtype": "$TEST_TRANSPORT", 00:26:08.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:08.676 "adrfam": "ipv4", 00:26:08.676 "trsvcid": "$NVMF_PORT", 00:26:08.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:08.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:08.676 "hdgst": ${hdgst:-false}, 00:26:08.676 "ddgst": ${ddgst:-false} 00:26:08.676 }, 00:26:08.676 "method": "bdev_nvme_attach_controller" 00:26:08.676 } 00:26:08.676 EOF 00:26:08.676 )") 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # shift 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default 
-- target/dif.sh@72 -- # (( file <= files )) 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libasan 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:08.676 "params": { 00:26:08.676 "name": "Nvme0", 00:26:08.676 "trtype": "tcp", 00:26:08.676 "traddr": "10.0.0.2", 00:26:08.676 "adrfam": "ipv4", 00:26:08.676 "trsvcid": "4420", 00:26:08.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:08.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:08.676 "hdgst": false, 00:26:08.676 "ddgst": false 00:26:08.676 }, 00:26:08.676 "method": "bdev_nvme_attach_controller" 00:26:08.676 }' 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:08.676 00:03:39 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:08.934 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:08.934 fio-3.35 00:26:08.934 Starting 1 thread 00:26:08.934 EAL: No free 2048 kB hugepages reported on node 1 00:26:21.151 00:26:21.151 filename0: (groupid=0, jobs=1): err= 0: pid=3492050: Thu Jul 25 00:03:49 2024 00:26:21.151 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10014msec) 00:26:21.151 slat (nsec): min=6815, max=67349, avg=8966.61, stdev=3805.55 00:26:21.151 clat (usec): min=40813, max=46545, avg=41011.45, stdev=376.14 00:26:21.151 lat (usec): min=40820, max=46582, avg=41020.42, stdev=376.56 00:26:21.151 clat percentiles (usec): 00:26:21.151 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:21.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:21.151 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:21.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:26:21.151 | 99.99th=[46400] 00:26:21.151 bw ( KiB/s): min= 384, max= 416, per=99.52%, avg=388.80, stdev=11.72, samples=20 00:26:21.151 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:21.151 
lat (msec) : 50=100.00% 00:26:21.151 cpu : usr=89.65%, sys=10.08%, ctx=14, majf=0, minf=250 00:26:21.151 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:21.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:21.151 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:21.151 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:21.151 00:26:21.151 Run status group 0 (all jobs): 00:26:21.151 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10014-10014msec 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:26:21.151 real 0m11.139s 00:26:21.151 user 0m10.101s 00:26:21.151 sys 0m1.266s 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 ************************************ 00:26:21.151 END TEST fio_dif_1_default 00:26:21.151 ************************************ 00:26:21.151 00:03:50 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:21.151 00:03:50 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:21.151 00:03:50 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 ************************************ 00:26:21.151 START TEST fio_dif_1_multi_subsystems 00:26:21.151 ************************************ 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:21.151 00:03:50 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 bdev_null0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 [2024-07-25 00:03:50.238502] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 bdev_null1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.151 { 00:26:21.151 "params": { 00:26:21.151 "name": "Nvme$subsystem", 00:26:21.151 "trtype": "$TEST_TRANSPORT", 00:26:21.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.151 "adrfam": "ipv4", 00:26:21.151 "trsvcid": "$NVMF_PORT", 00:26:21.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.151 "hdgst": ${hdgst:-false}, 00:26:21.151 "ddgst": ${ddgst:-false} 00:26:21.151 }, 00:26:21.151 "method": "bdev_nvme_attach_controller" 00:26:21.151 } 00:26:21.151 EOF 00:26:21.151 )") 00:26:21.151 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # shift 00:26:21.152 00:03:50 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libasan 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:21.152 { 00:26:21.152 "params": { 00:26:21.152 "name": "Nvme$subsystem", 00:26:21.152 "trtype": "$TEST_TRANSPORT", 00:26:21.152 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:21.152 "adrfam": "ipv4", 00:26:21.152 "trsvcid": "$NVMF_PORT", 00:26:21.152 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:21.152 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:21.152 "hdgst": ${hdgst:-false}, 00:26:21.152 "ddgst": ${ddgst:-false} 00:26:21.152 }, 00:26:21.152 "method": "bdev_nvme_attach_controller" 00:26:21.152 } 00:26:21.152 EOF 00:26:21.152 )") 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
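The cat/jq/IFS trace above is the tail of gen_nvmf_target_json: the heredoc expands once per subsystem into one bdev_nvme_attach_controller parameter object, the objects are comma-joined, and fio reads the result through --spdk_json_conf /dev/fd/62 so the plugin attaches Nvme0 and Nvme1 before the job starts. A reduced sketch of that generator; the bare [%s] array wrapper below stands in for the full bdev-subsystem JSON layout the real helper emits:

#!/usr/bin/env bash
# Sketch of gen_nvmf_target_json's core loop. Field values mirror the
# printf output in the trace above.
gen_attach_params() {
    local sub config=()
    for sub in "$@"; do
        config+=("{\"method\":\"bdev_nvme_attach_controller\",\"params\":{
            \"name\":\"Nvme$sub\",\"trtype\":\"tcp\",\"traddr\":\"10.0.0.2\",
            \"adrfam\":\"ipv4\",\"trsvcid\":\"4420\",
            \"subnqn\":\"nqn.2016-06.io.spdk:cnode$sub\",
            \"hostnqn\":\"nqn.2016-06.io.spdk:host$sub\",
            \"hdgst\":false,\"ddgst\":false}}")
    done
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .   # validate and pretty-print
}
gen_attach_params 0 1    # one attach-controller entry per subsystem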
00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:21.152 "params": { 00:26:21.152 "name": "Nvme0", 00:26:21.152 "trtype": "tcp", 00:26:21.152 "traddr": "10.0.0.2", 00:26:21.152 "adrfam": "ipv4", 00:26:21.152 "trsvcid": "4420", 00:26:21.152 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:21.152 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:21.152 "hdgst": false, 00:26:21.152 "ddgst": false 00:26:21.152 }, 00:26:21.152 "method": "bdev_nvme_attach_controller" 00:26:21.152 },{ 00:26:21.152 "params": { 00:26:21.152 "name": "Nvme1", 00:26:21.152 "trtype": "tcp", 00:26:21.152 "traddr": "10.0.0.2", 00:26:21.152 "adrfam": "ipv4", 00:26:21.152 "trsvcid": "4420", 00:26:21.152 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:21.152 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:21.152 "hdgst": false, 00:26:21.152 "ddgst": false 00:26:21.152 }, 00:26:21.152 "method": "bdev_nvme_attach_controller" 00:26:21.152 }' 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:21.152 00:03:50 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:21.152 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:21.152 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:21.152 fio-3.35 00:26:21.152 Starting 2 threads 00:26:21.152 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.114 00:26:31.114 filename0: (groupid=0, jobs=1): err= 0: pid=3493461: Thu Jul 25 00:04:01 2024 00:26:31.114 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:26:31.114 slat (nsec): min=6850, max=28655, avg=9626.75, stdev=2640.17 00:26:31.114 clat (usec): min=40872, max=46410, avg=40996.09, stdev=353.61 00:26:31.114 lat (usec): min=40880, max=46427, avg=41005.71, stdev=353.78 00:26:31.114 clat percentiles (usec): 00:26:31.114 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:31.114 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:31.114 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:31.114 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:26:31.114 | 99.99th=[46400] 
00:26:31.114 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:26:31.114 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:31.114 lat (msec) : 50=100.00% 00:26:31.114 cpu : usr=94.34%, sys=5.26%, ctx=52, majf=0, minf=67 00:26:31.114 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.114 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.114 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:31.114 filename1: (groupid=0, jobs=1): err= 0: pid=3493462: Thu Jul 25 00:04:01 2024 00:26:31.114 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10010msec) 00:26:31.114 slat (nsec): min=7787, max=34665, avg=9375.23, stdev=2481.66 00:26:31.114 clat (usec): min=40885, max=46362, avg=40992.87, stdev=344.33 00:26:31.114 lat (usec): min=40892, max=46378, avg=41002.25, stdev=344.50 00:26:31.114 clat percentiles (usec): 00:26:31.114 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:31.114 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:31.114 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:31.114 | 99.00th=[41157], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:26:31.114 | 99.99th=[46400] 00:26:31.114 bw ( KiB/s): min= 384, max= 416, per=49.75%, avg=388.80, stdev=11.72, samples=20 00:26:31.114 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:26:31.114 lat (msec) : 50=100.00% 00:26:31.114 cpu : usr=94.43%, sys=5.31%, ctx=13, majf=0, minf=190 00:26:31.114 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:31.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:31.114 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:31.114 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:31.114 00:26:31.114 Run status group 0 (all jobs): 00:26:31.114 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10010-10011msec 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.114 00:04:01 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.114 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:31.115 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.115 00:26:31.115 real 0m11.476s 00:26:31.115 user 0m20.435s 00:26:31.115 sys 0m1.330s 00:26:31.115 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:31.115 00:04:01 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:31.115 ************************************ 00:26:31.115 END TEST fio_dif_1_multi_subsystems 00:26:31.115 ************************************ 00:26:31.115 00:04:01 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:31.115 00:04:01 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:31.115 00:04:01 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.115 00:04:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:31.372 ************************************ 00:26:31.372 START TEST fio_dif_rand_params 00:26:31.372 ************************************ 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.372 bdev_null0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:31.372 [2024-07-25 00:04:01.757116] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:31.372 { 00:26:31.372 "params": { 00:26:31.372 "name": "Nvme$subsystem", 00:26:31.372 "trtype": "$TEST_TRANSPORT", 00:26:31.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:31.372 "adrfam": "ipv4", 00:26:31.372 "trsvcid": "$NVMF_PORT", 00:26:31.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:31.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:31.372 "hdgst": ${hdgst:-false}, 00:26:31.372 "ddgst": ${ddgst:-false} 00:26:31.372 }, 00:26:31.372 "method": "bdev_nvme_attach_controller" 00:26:31.372 } 00:26:31.372 EOF 00:26:31.372 )") 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:31.372 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
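The records above show gen_nvmf_target_json building one heredoc JSON fragment per subsystem in the bash array config, then comma-joining the fragments via IFS and pretty-printing with jq. A minimal sketch of that accumulate/join/print pattern, with illustrative fragment contents rather than the literal nvmf/common.sh helper:

config=()
for sub in 0 1; do
  # one fragment per subsystem, mirroring the config+=("$(cat <<-EOF ... )") records in the trace
  config+=("$(cat <<EOF
{ "name": "Nvme$sub", "trtype": "tcp", "traddr": "10.0.0.2", "trsvcid": "4420" }
EOF
)")
done
# "${config[*]}" joins the array elements with the first character of IFS;
# wrapping the result in [...] keeps the comma-joined fragments valid JSON for jq
( IFS=,; printf '[%s]\n' "${config[*]}" ) | jq .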
00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:31.373 "params": { 00:26:31.373 "name": "Nvme0", 00:26:31.373 "trtype": "tcp", 00:26:31.373 "traddr": "10.0.0.2", 00:26:31.373 "adrfam": "ipv4", 00:26:31.373 "trsvcid": "4420", 00:26:31.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:31.373 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:31.373 "hdgst": false, 00:26:31.373 "ddgst": false 00:26:31.373 }, 00:26:31.373 "method": "bdev_nvme_attach_controller" 00:26:31.373 }' 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:31.373 00:04:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:31.630 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:31.630 ... 
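The resolved JSON above is then handed to stock fio with SPDK's bdev engine preloaded; the trace passes the bdev config and the generated job file through /dev/fd/62 and /dev/fd/61. The same invocation with ordinary files, as a hedged sketch — the job options echo the bs/numjobs/iodepth/runtime values set at the start of this test, and the bdev name Nvme0n1 assumes SPDK's usual <controller>n<nsid> naming:

# job.fio stands in for what gen_fio_conf writes to /dev/fd/61
cat > job.fio <<EOF
[filename0]
rw=randread
bs=128k
numjobs=3
iodepth=3
time_based=1
runtime=5
filename=Nvme0n1
EOF
# bdev.json stands in for the jq output shown above (read from /dev/fd/62 in the trace)
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json job.fio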
00:26:31.630 fio-3.35 00:26:31.630 Starting 3 threads 00:26:31.630 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.181 00:26:38.181 filename0: (groupid=0, jobs=1): err= 0: pid=3494861: Thu Jul 25 00:04:07 2024 00:26:38.181 read: IOPS=206, BW=25.8MiB/s (27.0MB/s)(130MiB/5046msec) 00:26:38.181 slat (nsec): min=7616, max=40930, avg=12570.91, stdev=2470.85 00:26:38.181 clat (usec): min=5035, max=56997, avg=14499.17, stdev=11223.84 00:26:38.181 lat (usec): min=5047, max=57023, avg=14511.74, stdev=11223.82 00:26:38.181 clat percentiles (usec): 00:26:38.181 | 1.00th=[ 5407], 5.00th=[ 6456], 10.00th=[ 7963], 20.00th=[ 8848], 00:26:38.181 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11863], 60.00th=[12518], 00:26:38.181 | 70.00th=[13435], 80.00th=[14484], 90.00th=[16319], 95.00th=[50070], 00:26:38.181 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:26:38.181 | 99.99th=[56886] 00:26:38.181 bw ( KiB/s): min=19968, max=36864, per=33.20%, avg=26547.20, stdev=5386.22, samples=10 00:26:38.181 iops : min= 156, max= 288, avg=207.40, stdev=42.08, samples=10 00:26:38.181 lat (msec) : 10=30.29%, 20=61.73%, 50=2.69%, 100=5.29% 00:26:38.181 cpu : usr=91.67%, sys=7.87%, ctx=11, majf=0, minf=78 00:26:38.181 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 issued rwts: total=1040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:38.181 filename0: (groupid=0, jobs=1): err= 0: pid=3494862: Thu Jul 25 00:04:07 2024 00:26:38.181 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(135MiB/5045msec) 00:26:38.181 slat (nsec): min=7552, max=45017, avg=12871.45, stdev=2697.04 00:26:38.181 clat (usec): min=5874, max=89754, avg=14010.25, stdev=10856.36 00:26:38.181 lat (usec): min=5886, max=89767, avg=14023.12, stdev=10856.25 00:26:38.181 clat percentiles (usec): 00:26:38.181 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 8160], 20.00th=[ 8979], 00:26:38.181 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11207], 60.00th=[11863], 00:26:38.181 | 70.00th=[12649], 80.00th=[13566], 90.00th=[15664], 95.00th=[49021], 00:26:38.181 | 99.00th=[52691], 99.50th=[53740], 99.90th=[55313], 99.95th=[89654], 00:26:38.181 | 99.99th=[89654] 00:26:38.181 bw ( KiB/s): min=20224, max=34304, per=34.35%, avg=27468.80, stdev=4499.34, samples=10 00:26:38.181 iops : min= 158, max= 268, avg=214.60, stdev=35.15, samples=10 00:26:38.181 lat (msec) : 10=33.83%, 20=58.55%, 50=3.90%, 100=3.72% 00:26:38.181 cpu : usr=91.99%, sys=7.55%, ctx=10, majf=0, minf=54 00:26:38.181 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 issued rwts: total=1076,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:38.181 filename0: (groupid=0, jobs=1): err= 0: pid=3494863: Thu Jul 25 00:04:07 2024 00:26:38.181 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(130MiB/5044msec) 00:26:38.181 slat (nsec): min=7586, max=77715, avg=12757.13, stdev=3043.59 00:26:38.181 clat (usec): min=4780, max=91036, avg=14545.94, stdev=12228.32 00:26:38.181 lat (usec): min=4792, max=91048, avg=14558.70, stdev=12228.44 00:26:38.181 clat percentiles (usec): 
00:26:38.181 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 8717], 00:26:38.181 | 30.00th=[ 9634], 40.00th=[10552], 50.00th=[11600], 60.00th=[12780], 00:26:38.181 | 70.00th=[13829], 80.00th=[15008], 90.00th=[17171], 95.00th=[51643], 00:26:38.181 | 99.00th=[56361], 99.50th=[58983], 99.90th=[90702], 99.95th=[90702], 00:26:38.181 | 99.99th=[90702] 00:26:38.181 bw ( KiB/s): min=17955, max=35840, per=33.11%, avg=26473.90, stdev=7077.67, samples=10 00:26:38.181 iops : min= 140, max= 280, avg=206.80, stdev=55.33, samples=10 00:26:38.181 lat (msec) : 10=34.75%, 20=57.72%, 50=1.74%, 100=5.79% 00:26:38.181 cpu : usr=92.68%, sys=6.86%, ctx=13, majf=0, minf=191 00:26:38.181 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:38.181 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:38.181 issued rwts: total=1036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:38.181 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:38.181 00:26:38.181 Run status group 0 (all jobs): 00:26:38.181 READ: bw=78.1MiB/s (81.9MB/s), 25.7MiB/s-26.7MiB/s (26.9MB/s-28.0MB/s), io=394MiB (413MB), run=5044-5046msec 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
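From here the trace repeats, for sub_id 0, 1, and 2, the same four-step bring-up already seen in the DIF-type-3 pass, now with --dif-type 2. Condensed into the equivalent direct rpc.py calls (the trace drives these through the rpc_cmd wrapper; the scripts/rpc.py path is the conventional SPDK tree layout and is assumed):

for sub in 0 1 2; do
  # null bdev with 16-byte metadata and DIF type 2, exactly as in the records below
  scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
  scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420
done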
00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 bdev_null0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 [2024-07-25 00:04:08.065393] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 bdev_null1 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 bdev_null2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:26:38.182 { 00:26:38.182 "params": { 00:26:38.182 "name": "Nvme$subsystem", 00:26:38.182 "trtype": "$TEST_TRANSPORT", 00:26:38.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.182 "adrfam": "ipv4", 00:26:38.182 "trsvcid": "$NVMF_PORT", 00:26:38.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.182 "hdgst": ${hdgst:-false}, 00:26:38.182 "ddgst": ${ddgst:-false} 00:26:38.182 }, 00:26:38.182 "method": "bdev_nvme_attach_controller" 00:26:38.182 } 00:26:38.182 EOF 00:26:38.182 )") 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.182 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.182 { 00:26:38.182 "params": { 00:26:38.182 "name": "Nvme$subsystem", 00:26:38.183 "trtype": "$TEST_TRANSPORT", 00:26:38.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.183 "adrfam": "ipv4", 00:26:38.183 "trsvcid": "$NVMF_PORT", 00:26:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.183 "hdgst": ${hdgst:-false}, 00:26:38.183 "ddgst": ${ddgst:-false} 00:26:38.183 }, 00:26:38.183 "method": "bdev_nvme_attach_controller" 00:26:38.183 } 00:26:38.183 EOF 00:26:38.183 )") 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:38.183 { 00:26:38.183 "params": { 00:26:38.183 "name": "Nvme$subsystem", 00:26:38.183 "trtype": "$TEST_TRANSPORT", 00:26:38.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.183 "adrfam": "ipv4", 00:26:38.183 "trsvcid": "$NVMF_PORT", 00:26:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.183 "hdgst": ${hdgst:-false}, 00:26:38.183 "ddgst": ${ddgst:-false} 00:26:38.183 }, 00:26:38.183 "method": "bdev_nvme_attach_controller" 00:26:38.183 } 00:26:38.183 EOF 00:26:38.183 )") 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:38.183 "params": { 00:26:38.183 "name": "Nvme0", 00:26:38.183 "trtype": "tcp", 00:26:38.183 "traddr": "10.0.0.2", 00:26:38.183 "adrfam": "ipv4", 00:26:38.183 "trsvcid": "4420", 00:26:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:38.183 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:38.183 "hdgst": false, 00:26:38.183 "ddgst": false 00:26:38.183 }, 00:26:38.183 "method": "bdev_nvme_attach_controller" 00:26:38.183 },{ 00:26:38.183 "params": { 00:26:38.183 "name": "Nvme1", 00:26:38.183 "trtype": "tcp", 00:26:38.183 "traddr": "10.0.0.2", 00:26:38.183 "adrfam": "ipv4", 00:26:38.183 "trsvcid": "4420", 00:26:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.183 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.183 "hdgst": false, 00:26:38.183 "ddgst": false 00:26:38.183 }, 00:26:38.183 "method": "bdev_nvme_attach_controller" 00:26:38.183 },{ 00:26:38.183 "params": { 00:26:38.183 "name": "Nvme2", 00:26:38.183 "trtype": "tcp", 00:26:38.183 "traddr": "10.0.0.2", 00:26:38.183 "adrfam": "ipv4", 00:26:38.183 "trsvcid": "4420", 00:26:38.183 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:26:38.183 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:26:38.183 "hdgst": false, 00:26:38.183 "ddgst": false 00:26:38.183 }, 00:26:38.183 "method": "bdev_nvme_attach_controller" 00:26:38.183 }' 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1343 -- # asan_lib= 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:38.183 00:04:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:38.183 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:38.183 ... 00:26:38.183 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:38.183 ... 00:26:38.183 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:26:38.183 ... 00:26:38.183 fio-3.35 00:26:38.183 Starting 24 threads 00:26:38.183 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.380 00:26:50.380 filename0: (groupid=0, jobs=1): err= 0: pid=3495726: Thu Jul 25 00:04:19 2024 00:26:50.380 read: IOPS=61, BW=246KiB/s (252kB/s)(2488KiB/10095msec) 00:26:50.380 slat (usec): min=13, max=100, avg=59.14, stdev=20.47 00:26:50.380 clat (msec): min=157, max=382, avg=259.08, stdev=41.61 00:26:50.380 lat (msec): min=157, max=382, avg=259.13, stdev=41.62 00:26:50.380 clat percentiles (msec): 00:26:50.380 | 1.00th=[ 159], 5.00th=[ 171], 10.00th=[ 188], 20.00th=[ 239], 00:26:50.380 | 30.00th=[ 251], 40.00th=[ 255], 50.00th=[ 255], 60.00th=[ 275], 00:26:50.380 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 338], 00:26:50.380 | 99.00th=[ 363], 99.50th=[ 380], 99.90th=[ 384], 99.95th=[ 384], 00:26:50.380 | 99.99th=[ 384] 00:26:50.380 bw ( KiB/s): min= 128, max= 384, per=3.69%, avg=242.40, stdev=55.73, samples=20 00:26:50.380 iops : min= 32, max= 96, avg=60.60, stdev=13.93, samples=20 00:26:50.380 lat (msec) : 250=28.78%, 500=71.22% 00:26:50.380 cpu : usr=97.90%, sys=1.62%, ctx=27, majf=0, minf=9 00:26:50.380 IO depths : 1=3.2%, 2=9.5%, 4=25.1%, 8=53.1%, 16=9.2%, 32=0.0%, >=64=0.0% 00:26:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.380 filename0: (groupid=0, jobs=1): err= 0: pid=3495727: Thu Jul 25 00:04:19 2024 00:26:50.380 read: IOPS=87, BW=348KiB/s (356kB/s)(3512KiB/10091msec) 00:26:50.380 slat (nsec): min=7825, max=72111, avg=17792.79, stdev=11983.85 00:26:50.380 clat (msec): min=139, max=242, avg=183.54, stdev=19.47 00:26:50.380 lat (msec): min=139, max=243, avg=183.56, stdev=19.47 00:26:50.380 clat percentiles (msec): 00:26:50.380 | 1.00th=[ 140], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 169], 00:26:50.380 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 188], 00:26:50.380 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 236], 00:26:50.380 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 243], 00:26:50.380 | 99.99th=[ 243] 00:26:50.380 bw ( KiB/s): min= 256, max= 384, per=5.27%, avg=345.60, stdev=55.28, samples=20 00:26:50.380 iops : min= 64, max= 96, avg=86.40, stdev=13.82, samples=20 00:26:50.380 lat (msec) : 250=100.00% 00:26:50.380 cpu : usr=98.06%, sys=1.55%, ctx=24, majf=0, minf=9 00:26:50.380 IO depths : 1=0.9%, 2=7.2%, 4=25.1%, 
8=55.4%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 issued rwts: total=878,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.380 filename0: (groupid=0, jobs=1): err= 0: pid=3495728: Thu Jul 25 00:04:19 2024 00:26:50.380 read: IOPS=64, BW=260KiB/s (266kB/s)(2624KiB/10111msec) 00:26:50.380 slat (usec): min=4, max=229, avg=66.50, stdev=32.64 00:26:50.380 clat (msec): min=9, max=382, avg=245.42, stdev=63.86 00:26:50.380 lat (msec): min=9, max=382, avg=245.49, stdev=63.87 00:26:50.380 clat percentiles (msec): 00:26:50.380 | 1.00th=[ 10], 5.00th=[ 70], 10.00th=[ 186], 20.00th=[ 239], 00:26:50.380 | 30.00th=[ 247], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 275], 00:26:50.380 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 300], 00:26:50.380 | 99.00th=[ 338], 99.50th=[ 347], 99.90th=[ 384], 99.95th=[ 384], 00:26:50.380 | 99.99th=[ 384] 00:26:50.380 bw ( KiB/s): min= 128, max= 512, per=3.89%, avg=256.00, stdev=70.61, samples=20 00:26:50.380 iops : min= 32, max= 128, avg=64.00, stdev=17.65, samples=20 00:26:50.380 lat (msec) : 10=2.13%, 50=0.61%, 100=4.57%, 250=27.74%, 500=64.94% 00:26:50.380 cpu : usr=96.96%, sys=1.95%, ctx=124, majf=0, minf=9 00:26:50.380 IO depths : 1=1.1%, 2=7.3%, 4=25.0%, 8=55.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:26:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 issued rwts: total=656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.380 filename0: (groupid=0, jobs=1): err= 0: pid=3495729: Thu Jul 25 00:04:19 2024 00:26:50.380 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10092msec) 00:26:50.380 slat (nsec): min=12613, max=98262, avg=48668.51, stdev=23316.10 00:26:50.380 clat (msec): min=188, max=371, avg=271.57, stdev=28.93 00:26:50.380 lat (msec): min=188, max=371, avg=271.62, stdev=28.93 00:26:50.380 clat percentiles (msec): 00:26:50.380 | 1.00th=[ 190], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 251], 00:26:50.380 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 284], 00:26:50.380 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 334], 00:26:50.380 | 99.00th=[ 351], 99.50th=[ 351], 99.90th=[ 372], 99.95th=[ 372], 00:26:50.380 | 99.99th=[ 372] 00:26:50.380 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=230.40, stdev=50.70, samples=20 00:26:50.380 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:26:50.380 lat (msec) : 250=18.41%, 500=81.59% 00:26:50.380 cpu : usr=98.21%, sys=1.39%, ctx=11, majf=0, minf=9 00:26:50.380 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.380 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.380 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.380 filename0: (groupid=0, jobs=1): err= 0: pid=3495730: Thu Jul 25 00:04:19 2024 00:26:50.380 read: IOPS=61, BW=247KiB/s (252kB/s)(2488KiB/10092msec) 00:26:50.380 slat (nsec): min=8676, max=78893, avg=24651.19, stdev=12268.09 00:26:50.380 clat (msec): min=142, max=398, avg=259.20, stdev=43.68 
00:26:50.380 lat (msec): min=142, max=398, avg=259.23, stdev=43.68 00:26:50.380 clat percentiles (msec): 00:26:50.380 | 1.00th=[ 155], 5.00th=[ 178], 10.00th=[ 186], 20.00th=[ 239], 00:26:50.380 | 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 257], 60.00th=[ 279], 00:26:50.380 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 347], 00:26:50.380 | 99.00th=[ 376], 99.50th=[ 380], 99.90th=[ 401], 99.95th=[ 401], 00:26:50.380 | 99.99th=[ 401] 00:26:50.380 bw ( KiB/s): min= 128, max= 384, per=3.69%, avg=242.40, stdev=67.74, samples=20 00:26:50.380 iops : min= 32, max= 96, avg=60.60, stdev=16.93, samples=20 00:26:50.380 lat (msec) : 250=28.94%, 500=71.06% 00:26:50.380 cpu : usr=97.22%, sys=1.82%, ctx=92, majf=0, minf=9 00:26:50.380 IO depths : 1=2.9%, 2=9.2%, 4=25.1%, 8=53.4%, 16=9.5%, 32=0.0%, >=64=0.0% 00:26:50.380 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=622,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename0: (groupid=0, jobs=1): err= 0: pid=3495731: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10087msec) 00:26:50.381 slat (nsec): min=11962, max=94026, avg=25766.36, stdev=15699.43 00:26:50.381 clat (msec): min=166, max=459, avg=272.34, stdev=36.82 00:26:50.381 lat (msec): min=166, max=459, avg=272.37, stdev=36.82 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 171], 5.00th=[ 239], 10.00th=[ 239], 20.00th=[ 247], 00:26:50.381 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.381 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 359], 00:26:50.381 | 99.00th=[ 372], 99.50th=[ 430], 99.90th=[ 460], 99.95th=[ 460], 00:26:50.381 | 99.99th=[ 460] 00:26:50.381 bw ( KiB/s): min= 128, max= 272, per=3.51%, avg=230.40, stdev=51.23, samples=20 00:26:50.381 iops : min= 32, max= 68, avg=57.60, stdev=12.81, samples=20 00:26:50.381 lat (msec) : 250=23.99%, 500=76.01% 00:26:50.381 cpu : usr=96.48%, sys=2.26%, ctx=215, majf=0, minf=9 00:26:50.381 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.8%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename0: (groupid=0, jobs=1): err= 0: pid=3495732: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=58, BW=236KiB/s (241kB/s)(2368KiB/10054msec) 00:26:50.381 slat (nsec): min=6060, max=83063, avg=25009.83, stdev=9461.17 00:26:50.381 clat (msec): min=183, max=374, avg=271.50, stdev=32.64 00:26:50.381 lat (msec): min=183, max=374, avg=271.53, stdev=32.64 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 184], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 251], 00:26:50.381 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 284], 00:26:50.381 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 334], 00:26:50.381 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:26:50.381 | 99.99th=[ 376] 00:26:50.381 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=230.40, stdev=50.70, samples=20 00:26:50.381 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:26:50.381 lat (msec) : 250=17.57%, 500=82.43% 00:26:50.381 cpu : 
usr=98.11%, sys=1.46%, ctx=25, majf=0, minf=9 00:26:50.381 IO depths : 1=5.1%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename0: (groupid=0, jobs=1): err= 0: pid=3495733: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10077msec) 00:26:50.381 slat (nsec): min=10294, max=94481, avg=60479.62, stdev=18930.63 00:26:50.381 clat (msec): min=236, max=365, avg=271.80, stdev=26.97 00:26:50.381 lat (msec): min=236, max=365, avg=271.86, stdev=26.97 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 249], 00:26:50.381 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.381 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 321], 00:26:50.381 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:26:50.381 | 99.99th=[ 368] 00:26:50.381 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=230.40, stdev=52.53, samples=20 00:26:50.381 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:26:50.381 lat (msec) : 250=21.96%, 500=78.04% 00:26:50.381 cpu : usr=97.37%, sys=1.76%, ctx=284, majf=0, minf=9 00:26:50.381 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename1: (groupid=0, jobs=1): err= 0: pid=3495734: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=82, BW=330KiB/s (338kB/s)(3328KiB/10092msec) 00:26:50.381 slat (usec): min=7, max=100, avg=16.18, stdev=12.11 00:26:50.381 clat (msec): min=136, max=346, avg=193.94, stdev=33.22 00:26:50.381 lat (msec): min=136, max=346, avg=193.96, stdev=33.22 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 142], 5.00th=[ 155], 10.00th=[ 167], 20.00th=[ 171], 00:26:50.381 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:26:50.381 | 70.00th=[ 194], 80.00th=[ 213], 90.00th=[ 251], 95.00th=[ 266], 00:26:50.381 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 347], 99.95th=[ 347], 00:26:50.381 | 99.99th=[ 347] 00:26:50.381 bw ( KiB/s): min= 256, max= 384, per=4.98%, avg=326.40, stdev=54.30, samples=20 00:26:50.381 iops : min= 64, max= 96, avg=81.60, stdev=13.57, samples=20 00:26:50.381 lat (msec) : 250=89.90%, 500=10.10% 00:26:50.381 cpu : usr=98.18%, sys=1.43%, ctx=20, majf=0, minf=9 00:26:50.381 IO depths : 1=1.7%, 2=4.7%, 4=14.9%, 8=67.7%, 16=11.1%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=91.2%, 8=3.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename1: (groupid=0, jobs=1): err= 0: pid=3495735: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10056msec) 00:26:50.381 slat (usec): min=5, max=105, avg=68.01, stdev=13.86 00:26:50.381 clat 
(msec): min=185, max=371, avg=271.17, stdev=25.67 00:26:50.381 lat (msec): min=185, max=371, avg=271.24, stdev=25.67 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 190], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 251], 00:26:50.381 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 284], 00:26:50.381 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 326], 00:26:50.381 | 99.00th=[ 351], 99.50th=[ 355], 99.90th=[ 372], 99.95th=[ 372], 00:26:50.381 | 99.99th=[ 372] 00:26:50.381 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=230.40, stdev=52.53, samples=20 00:26:50.381 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:26:50.381 lat (msec) : 250=19.26%, 500=80.74% 00:26:50.381 cpu : usr=98.06%, sys=1.47%, ctx=29, majf=0, minf=9 00:26:50.381 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename1: (groupid=0, jobs=1): err= 0: pid=3495736: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10079msec) 00:26:50.381 slat (usec): min=26, max=186, avg=74.18, stdev=18.02 00:26:50.381 clat (msec): min=118, max=403, avg=271.76, stdev=38.34 00:26:50.381 lat (msec): min=118, max=403, avg=271.83, stdev=38.33 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 165], 5.00th=[ 236], 10.00th=[ 239], 20.00th=[ 247], 00:26:50.381 | 30.00th=[ 253], 40.00th=[ 253], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.381 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 313], 95.00th=[ 359], 00:26:50.381 | 99.00th=[ 372], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 405], 00:26:50.381 | 99.99th=[ 405] 00:26:50.381 bw ( KiB/s): min= 128, max= 256, per=3.51%, avg=230.40, stdev=50.70, samples=20 00:26:50.381 iops : min= 32, max= 64, avg=57.60, stdev=12.68, samples=20 00:26:50.381 lat (msec) : 250=25.51%, 500=74.49% 00:26:50.381 cpu : usr=96.47%, sys=2.22%, ctx=116, majf=0, minf=9 00:26:50.381 IO depths : 1=4.2%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:26:50.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.381 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.381 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.381 filename1: (groupid=0, jobs=1): err= 0: pid=3495737: Thu Jul 25 00:04:19 2024 00:26:50.381 read: IOPS=87, BW=349KiB/s (357kB/s)(3520KiB/10091msec) 00:26:50.381 slat (nsec): min=7878, max=60632, avg=17547.15, stdev=9728.13 00:26:50.381 clat (msec): min=139, max=242, avg=183.30, stdev=19.70 00:26:50.381 lat (msec): min=139, max=243, avg=183.32, stdev=19.70 00:26:50.381 clat percentiles (msec): 00:26:50.381 | 1.00th=[ 140], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 169], 00:26:50.381 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 188], 00:26:50.381 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 201], 95.00th=[ 239], 00:26:50.381 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 243], 00:26:50.381 | 99.99th=[ 243] 00:26:50.381 bw ( KiB/s): min= 256, max= 384, per=5.27%, avg=345.60, stdev=60.18, samples=20 00:26:50.381 iops : min= 64, max= 96, avg=86.40, stdev=15.05, samples=20 00:26:50.381 lat (msec) : 
250=100.00% 00:26:50.382 cpu : usr=97.85%, sys=1.81%, ctx=20, majf=0, minf=9 00:26:50.382 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename1: (groupid=0, jobs=1): err= 0: pid=3495738: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=90, BW=364KiB/s (373kB/s)(3680KiB/10115msec) 00:26:50.382 slat (nsec): min=5989, max=34087, avg=10792.63, stdev=3539.32 00:26:50.382 clat (msec): min=40, max=274, avg=175.21, stdev=34.98 00:26:50.382 lat (msec): min=40, max=274, avg=175.22, stdev=34.98 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 41], 5.00th=[ 102], 10.00th=[ 142], 20.00th=[ 165], 00:26:50.382 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 184], 60.00th=[ 186], 00:26:50.382 | 70.00th=[ 188], 80.00th=[ 190], 90.00th=[ 201], 95.00th=[ 218], 00:26:50.382 | 99.00th=[ 271], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:26:50.382 | 99.99th=[ 275] 00:26:50.382 bw ( KiB/s): min= 256, max= 640, per=5.51%, avg=361.60, stdev=80.65, samples=20 00:26:50.382 iops : min= 64, max= 160, avg=90.40, stdev=20.16, samples=20 00:26:50.382 lat (msec) : 50=1.74%, 100=1.96%, 250=94.57%, 500=1.74% 00:26:50.382 cpu : usr=97.50%, sys=1.88%, ctx=19, majf=0, minf=9 00:26:50.382 IO depths : 1=0.7%, 2=3.4%, 4=14.2%, 8=69.8%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=91.1%, 8=3.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename1: (groupid=0, jobs=1): err= 0: pid=3495739: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=86, BW=344KiB/s (353kB/s)(3480KiB/10108msec) 00:26:50.382 slat (nsec): min=7260, max=56926, avg=15970.23, stdev=9934.89 00:26:50.382 clat (msec): min=131, max=243, avg=184.79, stdev=18.51 00:26:50.382 lat (msec): min=131, max=243, avg=184.81, stdev=18.51 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:26:50.382 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 190], 00:26:50.382 | 70.00th=[ 192], 80.00th=[ 194], 90.00th=[ 205], 95.00th=[ 220], 00:26:50.382 | 99.00th=[ 243], 99.50th=[ 243], 99.90th=[ 243], 99.95th=[ 243], 00:26:50.382 | 99.99th=[ 243] 00:26:50.382 bw ( KiB/s): min= 256, max= 384, per=5.21%, avg=341.60, stdev=56.69, samples=20 00:26:50.382 iops : min= 64, max= 96, avg=85.40, stdev=14.17, samples=20 00:26:50.382 lat (msec) : 250=100.00% 00:26:50.382 cpu : usr=97.95%, sys=1.68%, ctx=18, majf=0, minf=9 00:26:50.382 IO depths : 1=5.2%, 2=10.7%, 4=22.8%, 8=54.0%, 16=7.4%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename1: (groupid=0, jobs=1): err= 0: pid=3495740: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10087msec) 00:26:50.382 slat (nsec): min=3969, 
max=87511, avg=21289.34, stdev=7557.23 00:26:50.382 clat (msec): min=117, max=374, avg=272.40, stdev=29.70 00:26:50.382 lat (msec): min=117, max=374, avg=272.42, stdev=29.70 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 239], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 249], 00:26:50.382 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.382 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 321], 00:26:50.382 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:26:50.382 | 99.99th=[ 376] 00:26:50.382 bw ( KiB/s): min= 128, max= 272, per=3.51%, avg=230.40, stdev=52.79, samples=20 00:26:50.382 iops : min= 32, max= 68, avg=57.60, stdev=13.20, samples=20 00:26:50.382 lat (msec) : 250=21.28%, 500=78.72% 00:26:50.382 cpu : usr=96.31%, sys=2.48%, ctx=70, majf=0, minf=9 00:26:50.382 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename1: (groupid=0, jobs=1): err= 0: pid=3495741: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=58, BW=234KiB/s (240kB/s)(2360KiB/10079msec) 00:26:50.382 slat (nsec): min=9489, max=98512, avg=32584.54, stdev=20717.41 00:26:50.382 clat (msec): min=157, max=452, avg=272.97, stdev=46.37 00:26:50.382 lat (msec): min=157, max=452, avg=273.01, stdev=46.37 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 159], 5.00th=[ 205], 10.00th=[ 243], 20.00th=[ 251], 00:26:50.382 | 30.00th=[ 255], 40.00th=[ 257], 50.00th=[ 266], 60.00th=[ 284], 00:26:50.382 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 363], 00:26:50.382 | 99.00th=[ 451], 99.50th=[ 451], 99.90th=[ 451], 99.95th=[ 451], 00:26:50.382 | 99.99th=[ 451] 00:26:50.382 bw ( KiB/s): min= 128, max= 256, per=3.50%, avg=229.60, stdev=50.40, samples=20 00:26:50.382 iops : min= 32, max= 64, avg=57.40, stdev=12.60, samples=20 00:26:50.382 lat (msec) : 250=15.59%, 500=84.41% 00:26:50.382 cpu : usr=97.10%, sys=1.83%, ctx=44, majf=0, minf=9 00:26:50.382 IO depths : 1=4.7%, 2=11.0%, 4=25.1%, 8=51.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename2: (groupid=0, jobs=1): err= 0: pid=3495742: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=58, BW=235KiB/s (240kB/s)(2368KiB/10087msec) 00:26:50.382 slat (nsec): min=5954, max=98522, avg=24132.74, stdev=14250.01 00:26:50.382 clat (msec): min=117, max=371, avg=272.36, stdev=33.42 00:26:50.382 lat (msec): min=117, max=371, avg=272.38, stdev=33.42 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 180], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 249], 00:26:50.382 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.382 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 338], 00:26:50.382 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:26:50.382 | 99.99th=[ 372] 00:26:50.382 bw ( KiB/s): min= 128, max= 272, per=3.51%, avg=230.40, stdev=49.08, samples=20 00:26:50.382 iops : min= 32, max= 
68, avg=57.60, stdev=12.27, samples=20 00:26:50.382 lat (msec) : 250=21.62%, 500=78.38% 00:26:50.382 cpu : usr=95.60%, sys=2.57%, ctx=288, majf=0, minf=9 00:26:50.382 IO depths : 1=2.0%, 2=8.3%, 4=25.0%, 8=54.2%, 16=10.5%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename2: (groupid=0, jobs=1): err= 0: pid=3495743: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=58, BW=236KiB/s (241kB/s)(2368KiB/10053msec) 00:26:50.382 slat (nsec): min=16323, max=93764, avg=25893.39, stdev=8520.46 00:26:50.382 clat (msec): min=182, max=399, avg=271.48, stdev=34.84 00:26:50.382 lat (msec): min=182, max=399, avg=271.51, stdev=34.84 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 241], 20.00th=[ 251], 00:26:50.382 | 30.00th=[ 255], 40.00th=[ 259], 50.00th=[ 275], 60.00th=[ 284], 00:26:50.382 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 300], 95.00th=[ 342], 00:26:50.382 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 401], 99.95th=[ 401], 00:26:50.382 | 99.99th=[ 401] 00:26:50.382 bw ( KiB/s): min= 128, max= 272, per=3.51%, avg=230.40, stdev=50.97, samples=20 00:26:50.382 iops : min= 32, max= 68, avg=57.60, stdev=12.74, samples=20 00:26:50.382 lat (msec) : 250=17.57%, 500=82.43% 00:26:50.382 cpu : usr=98.05%, sys=1.42%, ctx=57, majf=0, minf=9 00:26:50.382 IO depths : 1=4.6%, 2=10.8%, 4=25.0%, 8=51.7%, 16=7.9%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.382 filename2: (groupid=0, jobs=1): err= 0: pid=3495744: Thu Jul 25 00:04:19 2024 00:26:50.382 read: IOPS=86, BW=345KiB/s (353kB/s)(3480KiB/10091msec) 00:26:50.382 slat (nsec): min=8145, max=97704, avg=18684.15, stdev=12856.75 00:26:50.382 clat (msec): min=139, max=313, avg=184.87, stdev=22.92 00:26:50.382 lat (msec): min=139, max=313, avg=184.89, stdev=22.93 00:26:50.382 clat percentiles (msec): 00:26:50.382 | 1.00th=[ 140], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:26:50.382 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 188], 00:26:50.382 | 70.00th=[ 192], 80.00th=[ 192], 90.00th=[ 203], 95.00th=[ 239], 00:26:50.382 | 99.00th=[ 268], 99.50th=[ 309], 99.90th=[ 313], 99.95th=[ 313], 00:26:50.382 | 99.99th=[ 313] 00:26:50.382 bw ( KiB/s): min= 224, max= 384, per=5.21%, avg=341.60, stdev=62.35, samples=20 00:26:50.382 iops : min= 56, max= 96, avg=85.40, stdev=15.59, samples=20 00:26:50.382 lat (msec) : 250=98.62%, 500=1.38% 00:26:50.382 cpu : usr=97.71%, sys=1.82%, ctx=26, majf=0, minf=9 00:26:50.382 IO depths : 1=4.8%, 2=10.5%, 4=23.1%, 8=53.9%, 16=7.7%, 32=0.0%, >=64=0.0% 00:26:50.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.382 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 filename2: (groupid=0, jobs=1): err= 0: pid=3495745: Thu Jul 25 00:04:19 2024 00:26:50.383 read: IOPS=66, 
BW=265KiB/s (271kB/s)(2680KiB/10114msec) 00:26:50.383 slat (usec): min=4, max=104, avg=51.43, stdev=22.59 00:26:50.383 clat (msec): min=35, max=298, avg=240.88, stdev=58.61 00:26:50.383 lat (msec): min=35, max=298, avg=240.94, stdev=58.62 00:26:50.383 clat percentiles (msec): 00:26:50.383 | 1.00th=[ 37], 5.00th=[ 101], 10.00th=[ 180], 20.00th=[ 205], 00:26:50.383 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 257], 00:26:50.383 | 70.00th=[ 279], 80.00th=[ 288], 90.00th=[ 288], 95.00th=[ 292], 00:26:50.383 | 99.00th=[ 300], 99.50th=[ 300], 99.90th=[ 300], 99.95th=[ 300], 00:26:50.383 | 99.99th=[ 300] 00:26:50.383 bw ( KiB/s): min= 128, max= 513, per=3.99%, avg=261.65, stdev=76.34, samples=20 00:26:50.383 iops : min= 32, max= 128, avg=65.40, stdev=19.04, samples=20 00:26:50.383 lat (msec) : 50=2.39%, 100=2.39%, 250=33.13%, 500=62.09% 00:26:50.383 cpu : usr=98.18%, sys=1.32%, ctx=30, majf=0, minf=9 00:26:50.383 IO depths : 1=5.4%, 2=11.5%, 4=24.5%, 8=51.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:26:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 filename2: (groupid=0, jobs=1): err= 0: pid=3495746: Thu Jul 25 00:04:19 2024 00:26:50.383 read: IOPS=87, BW=348KiB/s (356kB/s)(3520KiB/10114msec) 00:26:50.383 slat (nsec): min=5971, max=87901, avg=13383.05, stdev=12116.84 00:26:50.383 clat (msec): min=90, max=327, avg=183.21, stdev=32.77 00:26:50.383 lat (msec): min=90, max=327, avg=183.22, stdev=32.77 00:26:50.383 clat percentiles (msec): 00:26:50.383 | 1.00th=[ 91], 5.00th=[ 136], 10.00th=[ 148], 20.00th=[ 169], 00:26:50.383 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 186], 00:26:50.383 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 218], 95.00th=[ 253], 00:26:50.383 | 99.00th=[ 288], 99.50th=[ 313], 99.90th=[ 330], 99.95th=[ 330], 00:26:50.383 | 99.99th=[ 330] 00:26:50.383 bw ( KiB/s): min= 256, max= 384, per=5.27%, avg=345.60, stdev=41.01, samples=20 00:26:50.383 iops : min= 64, max= 96, avg=86.40, stdev=10.25, samples=20 00:26:50.383 lat (msec) : 100=1.82%, 250=92.73%, 500=5.45% 00:26:50.383 cpu : usr=98.03%, sys=1.45%, ctx=15, majf=0, minf=9 00:26:50.383 IO depths : 1=0.3%, 2=1.1%, 4=8.3%, 8=77.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:26:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 complete : 0=0.0%, 4=89.3%, 8=5.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 filename2: (groupid=0, jobs=1): err= 0: pid=3495747: Thu Jul 25 00:04:19 2024 00:26:50.383 read: IOPS=58, BW=235KiB/s (241kB/s)(2368KiB/10077msec) 00:26:50.383 slat (usec): min=21, max=105, avg=71.24, stdev=10.30 00:26:50.383 clat (msec): min=236, max=365, avg=271.70, stdev=26.98 00:26:50.383 lat (msec): min=236, max=365, avg=271.77, stdev=26.98 00:26:50.383 clat percentiles (msec): 00:26:50.383 | 1.00th=[ 236], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 249], 00:26:50.383 | 30.00th=[ 253], 40.00th=[ 253], 50.00th=[ 275], 60.00th=[ 288], 00:26:50.383 | 70.00th=[ 288], 80.00th=[ 288], 90.00th=[ 296], 95.00th=[ 317], 00:26:50.383 | 99.00th=[ 368], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:26:50.383 | 99.99th=[ 368] 00:26:50.383 bw ( KiB/s): min= 128, max= 
256, per=3.51%, avg=230.40, stdev=52.53, samples=20 00:26:50.383 iops : min= 32, max= 64, avg=57.60, stdev=13.13, samples=20 00:26:50.383 lat (msec) : 250=22.13%, 500=77.87% 00:26:50.383 cpu : usr=96.42%, sys=2.18%, ctx=77, majf=0, minf=9 00:26:50.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=592,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 filename2: (groupid=0, jobs=1): err= 0: pid=3495748: Thu Jul 25 00:04:19 2024 00:26:50.383 read: IOPS=73, BW=295KiB/s (302kB/s)(2976KiB/10091msec) 00:26:50.383 slat (usec): min=13, max=121, avg=63.44, stdev=17.22 00:26:50.383 clat (msec): min=119, max=398, avg=216.00, stdev=51.62 00:26:50.383 lat (msec): min=119, max=398, avg=216.07, stdev=51.62 00:26:50.383 clat percentiles (msec): 00:26:50.383 | 1.00th=[ 121], 5.00th=[ 138], 10.00th=[ 169], 20.00th=[ 176], 00:26:50.383 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 203], 60.00th=[ 230], 00:26:50.383 | 70.00th=[ 243], 80.00th=[ 271], 90.00th=[ 284], 95.00th=[ 300], 00:26:50.383 | 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 397], 99.95th=[ 397], 00:26:50.383 | 99.99th=[ 397] 00:26:50.383 bw ( KiB/s): min= 144, max= 432, per=4.44%, avg=291.20, stdev=62.85, samples=20 00:26:50.383 iops : min= 36, max= 108, avg=72.80, stdev=15.71, samples=20 00:26:50.383 lat (msec) : 250=72.31%, 500=27.69% 00:26:50.383 cpu : usr=97.95%, sys=1.57%, ctx=57, majf=0, minf=9 00:26:50.383 IO depths : 1=1.3%, 2=3.6%, 4=12.5%, 8=71.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 complete : 0=0.0%, 4=90.5%, 8=4.5%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 filename2: (groupid=0, jobs=1): err= 0: pid=3495749: Thu Jul 25 00:04:19 2024 00:26:50.383 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10095msec) 00:26:50.383 slat (usec): min=11, max=102, avg=54.67, stdev=23.73 00:26:50.383 clat (msec): min=156, max=379, avg=265.17, stdev=34.64 00:26:50.383 lat (msec): min=156, max=379, avg=265.23, stdev=34.64 00:26:50.383 clat percentiles (msec): 00:26:50.383 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 239], 20.00th=[ 251], 00:26:50.383 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 259], 60.00th=[ 279], 00:26:50.383 | 70.00th=[ 284], 80.00th=[ 288], 90.00th=[ 292], 95.00th=[ 326], 00:26:50.383 | 99.00th=[ 338], 99.50th=[ 372], 99.90th=[ 380], 99.95th=[ 380], 00:26:50.383 | 99.99th=[ 380] 00:26:50.383 bw ( KiB/s): min= 128, max= 384, per=3.60%, avg=236.80, stdev=62.64, samples=20 00:26:50.383 iops : min= 32, max= 96, avg=59.20, stdev=15.66, samples=20 00:26:50.383 lat (msec) : 250=20.56%, 500=79.44% 00:26:50.383 cpu : usr=98.31%, sys=1.30%, ctx=10, majf=0, minf=9 00:26:50.383 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:50.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:50.383 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:50.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:50.383 00:26:50.383 Run status group 0 (all jobs): 
00:26:50.383 READ: bw=6549KiB/s (6707kB/s), 234KiB/s-364KiB/s (240kB/s-373kB/s), io=64.7MiB (67.8MB), run=10053-10115msec 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.383 00:04:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 bdev_null0 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 [2024-07-25 00:04:19.972732] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 bdev_null1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.384 00:04:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # shift 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem 
config 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:50.384 { 00:26:50.384 "params": { 00:26:50.384 "name": "Nvme$subsystem", 00:26:50.384 "trtype": "$TEST_TRANSPORT", 00:26:50.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.384 "adrfam": "ipv4", 00:26:50.384 "trsvcid": "$NVMF_PORT", 00:26:50.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.384 "hdgst": ${hdgst:-false}, 00:26:50.384 "ddgst": ${ddgst:-false} 00:26:50.384 }, 00:26:50.384 "method": "bdev_nvme_attach_controller" 00:26:50.384 } 00:26:50.384 EOF 00:26:50.384 )") 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libasan 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:50.384 { 00:26:50.384 "params": { 00:26:50.384 "name": "Nvme$subsystem", 00:26:50.384 "trtype": "$TEST_TRANSPORT", 00:26:50.384 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:50.384 "adrfam": "ipv4", 00:26:50.384 "trsvcid": "$NVMF_PORT", 00:26:50.384 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:50.384 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:50.384 "hdgst": ${hdgst:-false}, 00:26:50.384 "ddgst": ${ddgst:-false} 00:26:50.384 }, 00:26:50.384 "method": "bdev_nvme_attach_controller" 00:26:50.384 } 00:26:50.384 EOF 00:26:50.384 )") 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
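The gen_nvmf_target_json helper traced above builds the bdev configuration one subsystem at a time: a here-document template is expanded per subsystem, the fragments are collected in a bash array, joined on commas via IFS, and the result is validated with jq. A minimal sketch of that pattern, assuming the same shape as the trace — the subsystem names and the bdev_nvme_attach_controller method come from this run, while the enclosing array passed to jq is illustrative:

config=()
for subsystem in 0 1; do
  config+=("$(cat <<EOF
{
  "params": { "name": "Nvme$subsystem", "trtype": "tcp" },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,                                    # join the fragments with commas, as nvmf/common.sh does
printf '[ %s ]\n' "${config[*]}" | jq .  # wrapped in an array here so jq sees valid JSON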
00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:50.384 00:04:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:50.384 "params": { 00:26:50.384 "name": "Nvme0", 00:26:50.384 "trtype": "tcp", 00:26:50.384 "traddr": "10.0.0.2", 00:26:50.384 "adrfam": "ipv4", 00:26:50.384 "trsvcid": "4420", 00:26:50.384 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:50.384 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:50.384 "hdgst": false, 00:26:50.384 "ddgst": false 00:26:50.384 }, 00:26:50.384 "method": "bdev_nvme_attach_controller" 00:26:50.384 },{ 00:26:50.384 "params": { 00:26:50.384 "name": "Nvme1", 00:26:50.384 "trtype": "tcp", 00:26:50.384 "traddr": "10.0.0.2", 00:26:50.384 "adrfam": "ipv4", 00:26:50.384 "trsvcid": "4420", 00:26:50.384 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:50.384 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:50.384 "hdgst": false, 00:26:50.384 "ddgst": false 00:26:50.384 }, 00:26:50.384 "method": "bdev_nvme_attach_controller" 00:26:50.384 }' 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:50.385 00:04:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:50.385 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:50.385 ... 00:26:50.385 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:50.385 ... 
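Everything fio needs is handed over through anonymous file descriptors in the trace above: /dev/fd/62 carries the JSON printed just before it and /dev/fd/61 the generated job file, while LD_PRELOAD injects SPDK's fio bdev ioengine. Redone by hand with ordinary files (bdev.json and dif.fio are illustrative names; the plugin and fio paths are the ones this workspace uses), the invocation would be roughly:

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif.fio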
00:26:50.385 fio-3.35 00:26:50.385 Starting 4 threads 00:26:50.385 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.643 00:26:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=3497143: Thu Jul 25 00:04:25 2024 00:26:55.643 read: IOPS=1804, BW=14.1MiB/s (14.8MB/s)(70.5MiB/5002msec) 00:26:55.643 slat (nsec): min=4356, max=90460, avg=19757.61, stdev=10461.43 00:26:55.643 clat (usec): min=896, max=8115, avg=4360.48, stdev=474.92 00:26:55.643 lat (usec): min=908, max=8128, avg=4380.24, stdev=474.83 00:26:55.643 clat percentiles (usec): 00:26:55.643 | 1.00th=[ 3294], 5.00th=[ 3818], 10.00th=[ 3982], 20.00th=[ 4113], 00:26:55.643 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:55.643 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5014], 00:26:55.643 | 99.00th=[ 6390], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7439], 00:26:55.643 | 99.99th=[ 8094] 00:26:55.643 bw ( KiB/s): min=13824, max=15136, per=24.99%, avg=14439.60, stdev=430.87, samples=10 00:26:55.643 iops : min= 1728, max= 1892, avg=1804.90, stdev=53.88, samples=10 00:26:55.643 lat (usec) : 1000=0.03% 00:26:55.643 lat (msec) : 2=0.12%, 4=10.15%, 10=89.70% 00:26:55.643 cpu : usr=93.40%, sys=6.02%, ctx=12, majf=0, minf=63 00:26:55.643 IO depths : 1=0.1%, 2=14.2%, 4=59.2%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.643 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.643 issued rwts: total=9028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:55.643 filename0: (groupid=0, jobs=1): err= 0: pid=3497144: Thu Jul 25 00:04:25 2024 00:26:55.643 read: IOPS=1808, BW=14.1MiB/s (14.8MB/s)(70.7MiB/5003msec) 00:26:55.643 slat (nsec): min=4422, max=83029, avg=19504.59, stdev=8344.83 00:26:55.643 clat (usec): min=1187, max=7897, avg=4360.08, stdev=510.11 00:26:55.643 lat (usec): min=1207, max=7913, avg=4379.58, stdev=509.97 00:26:55.643 clat percentiles (usec): 00:26:55.643 | 1.00th=[ 3130], 5.00th=[ 3687], 10.00th=[ 3949], 20.00th=[ 4113], 00:26:55.643 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:26:55.643 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4948], 00:26:55.643 | 99.00th=[ 6521], 99.50th=[ 6980], 99.90th=[ 7701], 99.95th=[ 7832], 00:26:55.643 | 99.99th=[ 7898] 00:26:55.643 bw ( KiB/s): min=13824, max=15184, per=25.05%, avg=14470.40, stdev=431.58, samples=10 00:26:55.643 iops : min= 1728, max= 1898, avg=1808.80, stdev=53.95, samples=10 00:26:55.643 lat (msec) : 2=0.09%, 4=11.33%, 10=88.58% 00:26:55.643 cpu : usr=94.92%, sys=4.60%, ctx=17, majf=0, minf=33 00:26:55.643 IO depths : 1=0.1%, 2=9.6%, 4=64.0%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.643 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.643 issued rwts: total=9049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.643 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:55.643 filename1: (groupid=0, jobs=1): err= 0: pid=3497145: Thu Jul 25 00:04:25 2024 00:26:55.643 read: IOPS=1799, BW=14.1MiB/s (14.7MB/s)(70.3MiB/5001msec) 00:26:55.643 slat (nsec): min=4281, max=90399, avg=19021.00, stdev=9956.78 00:26:55.643 clat (usec): min=813, max=8136, avg=4375.24, stdev=544.16 00:26:55.643 lat (usec): min=821, max=8157, avg=4394.26, stdev=543.93 00:26:55.644 clat percentiles (usec): 
00:26:55.644 | 1.00th=[ 3097], 5.00th=[ 3785], 10.00th=[ 4015], 20.00th=[ 4146], 00:26:55.644 | 30.00th=[ 4228], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:55.644 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5080], 00:26:55.644 | 99.00th=[ 6718], 99.50th=[ 7046], 99.90th=[ 7635], 99.95th=[ 7832], 00:26:55.644 | 99.99th=[ 8160] 00:26:55.644 bw ( KiB/s): min=13840, max=15152, per=24.93%, avg=14403.56, stdev=467.15, samples=9 00:26:55.644 iops : min= 1730, max= 1894, avg=1800.44, stdev=58.39, samples=9 00:26:55.644 lat (usec) : 1000=0.04% 00:26:55.644 lat (msec) : 2=0.19%, 4=9.73%, 10=90.03% 00:26:55.644 cpu : usr=93.98%, sys=5.48%, ctx=16, majf=0, minf=58 00:26:55.644 IO depths : 1=0.1%, 2=14.9%, 4=58.1%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.644 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.644 issued rwts: total=9001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:55.644 filename1: (groupid=0, jobs=1): err= 0: pid=3497146: Thu Jul 25 00:04:25 2024 00:26:55.644 read: IOPS=1809, BW=14.1MiB/s (14.8MB/s)(70.8MiB/5004msec) 00:26:55.644 slat (nsec): min=6386, max=75038, avg=19735.38, stdev=10244.62 00:26:55.644 clat (usec): min=1223, max=8589, avg=4347.66, stdev=491.33 00:26:55.644 lat (usec): min=1236, max=8628, avg=4367.40, stdev=491.27 00:26:55.644 clat percentiles (usec): 00:26:55.644 | 1.00th=[ 3097], 5.00th=[ 3720], 10.00th=[ 3949], 20.00th=[ 4113], 00:26:55.644 | 30.00th=[ 4178], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:26:55.644 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 5014], 00:26:55.644 | 99.00th=[ 6194], 99.50th=[ 6652], 99.90th=[ 7832], 99.95th=[ 8160], 00:26:55.644 | 99.99th=[ 8586] 00:26:55.644 bw ( KiB/s): min=13824, max=15104, per=25.07%, avg=14483.20, stdev=398.38, samples=10 00:26:55.644 iops : min= 1728, max= 1888, avg=1810.40, stdev=49.80, samples=10 00:26:55.644 lat (msec) : 2=0.10%, 4=11.38%, 10=88.52% 00:26:55.644 cpu : usr=93.56%, sys=5.96%, ctx=15, majf=0, minf=33 00:26:55.644 IO depths : 1=0.1%, 2=14.7%, 4=58.8%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:55.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.644 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:55.644 issued rwts: total=9057,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:55.644 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:55.644 00:26:55.644 Run status group 0 (all jobs): 00:26:55.644 READ: bw=56.4MiB/s (59.2MB/s), 14.1MiB/s-14.1MiB/s (14.7MB/s-14.8MB/s), io=282MiB (296MB), run=5001-5004msec 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set 
+x 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.942 00:26:55.942 real 0m24.566s 00:26:55.942 user 4m33.402s 00:26:55.942 sys 0m7.401s 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 ************************************ 00:26:55.942 END TEST fio_dif_rand_params 00:26:55.942 ************************************ 00:26:55.942 00:04:26 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:55.942 00:04:26 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:55.942 00:04:26 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:55.942 00:04:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:55.942 ************************************ 00:26:55.942 START TEST fio_dif_digest 00:26:55.942 ************************************ 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:55.942 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:55.943 
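fio_dif_digest below reuses the DIF plumbing but creates its null bdev with DIF type 3 and, per the hdgst=true/ddgst=true settings above, attaches over NVMe/TCP with both digests enabled. The rpc_cmd wrapper in the trace forwards to SPDK's scripts/rpc.py (an assumption about the stock test harness); issued by hand, the target-side setup that follows would look roughly like:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420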
00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 bdev_null0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:55.943 [2024-07-25 00:04:26.366192] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.943 { 00:26:55.943 "params": { 00:26:55.943 "name": "Nvme$subsystem", 00:26:55.943 "trtype": "$TEST_TRANSPORT", 00:26:55.943 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.943 "adrfam": "ipv4", 00:26:55.943 "trsvcid": "$NVMF_PORT", 00:26:55.943 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.943 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.943 "hdgst": ${hdgst:-false}, 00:26:55.943 "ddgst": 
${ddgst:-false} 00:26:55.943 }, 00:26:55.943 "method": "bdev_nvme_attach_controller" 00:26:55.943 } 00:26:55.943 EOF 00:26:55.943 )") 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1335 -- # local fio_dir=/usr/src/fio 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local sanitizers 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # shift 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local asan_lib= 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libasan 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:55.943 "params": { 00:26:55.943 "name": "Nvme0", 00:26:55.943 "trtype": "tcp", 00:26:55.943 "traddr": "10.0.0.2", 00:26:55.943 "adrfam": "ipv4", 00:26:55.943 "trsvcid": "4420", 00:26:55.943 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:55.943 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:55.943 "hdgst": true, 00:26:55.943 "ddgst": true 00:26:55.943 }, 00:26:55.943 "method": "bdev_nvme_attach_controller" 00:26:55.943 }' 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # for sanitizer in "${sanitizers[@]}" 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # grep libclang_rt.asan 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # awk '{print $3}' 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # asan_lib= 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # [[ -n '' ]] 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:55.943 00:04:26 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:56.202 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:56.202 ... 
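Unlike the earlier rand_params runs, the configuration printed above sets "hdgst" and "ddgst" to true, so the initiator negotiates NVMe/TCP header and data digests on this connection. Expressed as a standalone JSON-RPC request — the params block is verbatim from this run, while the JSON-RPC 2.0 envelope is added for illustration — the same attach would be:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": true,
    "ddgst": true
  }
}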
00:26:56.202 fio-3.35 00:26:56.203 Starting 3 threads 00:26:56.203 EAL: No free 2048 kB hugepages reported on node 1 00:27:08.391 00:27:08.391 filename0: (groupid=0, jobs=1): err= 0: pid=3498003: Thu Jul 25 00:04:37 2024 00:27:08.391 read: IOPS=202, BW=25.3MiB/s (26.5MB/s)(254MiB/10048msec) 00:27:08.391 slat (nsec): min=5184, max=44990, avg=16977.94, stdev=4356.76 00:27:08.391 clat (usec): min=8447, max=57865, avg=14805.82, stdev=2928.22 00:27:08.391 lat (usec): min=8461, max=57884, avg=14822.80, stdev=2928.19 00:27:08.391 clat percentiles (usec): 00:27:08.391 | 1.00th=[ 9896], 5.00th=[12649], 10.00th=[13304], 20.00th=[13829], 00:27:08.391 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:27:08.391 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:27:08.391 | 99.00th=[17433], 99.50th=[21103], 99.90th=[57410], 99.95th=[57410], 00:27:08.391 | 99.99th=[57934] 00:27:08.391 bw ( KiB/s): min=23552, max=27904, per=32.63%, avg=25960.85, stdev=1028.65, samples=20 00:27:08.391 iops : min= 184, max= 218, avg=202.80, stdev= 8.06, samples=20 00:27:08.391 lat (msec) : 10=1.58%, 20=97.88%, 50=0.20%, 100=0.34% 00:27:08.391 cpu : usr=93.18%, sys=5.85%, ctx=152, majf=0, minf=83 00:27:08.391 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:08.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 issued rwts: total=2030,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:08.391 filename0: (groupid=0, jobs=1): err= 0: pid=3498004: Thu Jul 25 00:04:37 2024 00:27:08.391 read: IOPS=214, BW=26.8MiB/s (28.1MB/s)(269MiB/10047msec) 00:27:08.391 slat (nsec): min=4929, max=67536, avg=20726.01, stdev=6947.94 00:27:08.391 clat (usec): min=8537, max=96996, avg=13941.89, stdev=4426.08 00:27:08.391 lat (usec): min=8556, max=97016, avg=13962.61, stdev=4425.94 00:27:08.391 clat percentiles (usec): 00:27:08.391 | 1.00th=[10421], 5.00th=[11863], 10.00th=[12256], 20.00th=[12780], 00:27:08.391 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:27:08.391 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:27:08.391 | 99.00th=[18220], 99.50th=[54264], 99.90th=[55837], 99.95th=[95945], 00:27:08.391 | 99.99th=[96994] 00:27:08.391 bw ( KiB/s): min=23808, max=29184, per=34.64%, avg=27558.40, stdev=1378.04, samples=20 00:27:08.391 iops : min= 186, max= 228, avg=215.30, stdev=10.77, samples=20 00:27:08.391 lat (msec) : 10=0.84%, 20=98.28%, 50=0.09%, 100=0.79% 00:27:08.391 cpu : usr=86.82%, sys=9.13%, ctx=794, majf=0, minf=159 00:27:08.391 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:08.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 issued rwts: total=2155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:08.391 filename0: (groupid=0, jobs=1): err= 0: pid=3498005: Thu Jul 25 00:04:37 2024 00:27:08.391 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(258MiB/10006msec) 00:27:08.391 slat (nsec): min=5146, max=75425, avg=17209.79, stdev=4539.88 00:27:08.391 clat (usec): min=6407, max=24118, avg=14542.15, stdev=1455.88 00:27:08.391 lat (usec): min=6426, max=24145, avg=14559.36, stdev=1456.00 00:27:08.391 clat percentiles (usec): 
00:27:08.391 | 1.00th=[ 9634], 5.00th=[11863], 10.00th=[13042], 20.00th=[13698], 00:27:08.391 | 30.00th=[14091], 40.00th=[14353], 50.00th=[14615], 60.00th=[14877], 00:27:08.391 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16057], 95.00th=[16450], 00:27:08.391 | 99.00th=[17171], 99.50th=[17695], 99.90th=[21627], 99.95th=[21627], 00:27:08.391 | 99.99th=[24249] 00:27:08.391 bw ( KiB/s): min=25088, max=27648, per=33.21%, avg=26421.89, stdev=691.04, samples=19 00:27:08.391 iops : min= 196, max= 216, avg=206.42, stdev= 5.40, samples=19 00:27:08.391 lat (msec) : 10=1.94%, 20=97.91%, 50=0.15% 00:27:08.391 cpu : usr=93.73%, sys=5.65%, ctx=18, majf=0, minf=140 00:27:08.391 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:08.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:08.391 issued rwts: total=2061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:08.391 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:08.391 00:27:08.391 Run status group 0 (all jobs): 00:27:08.391 READ: bw=77.7MiB/s (81.5MB/s), 25.3MiB/s-26.8MiB/s (26.5MB/s-28.1MB/s), io=781MiB (819MB), run=10006-10048msec 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:08.391 00:27:08.391 real 0m11.044s 00:27:08.391 user 0m28.553s 00:27:08.391 sys 0m2.349s 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:08.391 00:04:37 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:27:08.391 ************************************ 00:27:08.391 END TEST fio_dif_digest 00:27:08.391 ************************************ 00:27:08.391 00:04:37 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:27:08.391 00:04:37 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:08.391 rmmod nvme_tcp 00:27:08.391 rmmod 
nvme_fabrics 00:27:08.391 rmmod nvme_keyring 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 3491696 ']' 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 3491696 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 3491696 ']' 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 3491696 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3491696 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3491696' 00:27:08.391 killing process with pid 3491696 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@967 -- # kill 3491696 00:27:08.391 00:04:37 nvmf_dif -- common/autotest_common.sh@972 -- # wait 3491696 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:08.391 00:04:37 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:08.391 Waiting for block devices as requested 00:27:08.391 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:08.391 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:08.649 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:08.649 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:08.649 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:08.649 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:08.906 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:08.906 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:08.906 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:08.906 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:09.164 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:09.164 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:09.164 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:09.422 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:09.422 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:09.422 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:09.422 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:09.680 00:04:40 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:09.680 00:04:40 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:09.680 00:04:40 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:09.680 00:04:40 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:09.680 00:04:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:09.680 00:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:09.680 00:04:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.580 00:04:42 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:11.580 00:27:11.580 real 1m7.610s 00:27:11.580 user 6m30.668s 00:27:11.580 sys 0m18.881s 00:27:11.580 00:04:42 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:11.580 00:04:42 nvmf_dif -- 
common/autotest_common.sh@10 -- # set +x 00:27:11.580 ************************************ 00:27:11.580 END TEST nvmf_dif 00:27:11.580 ************************************ 00:27:11.580 00:04:42 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:11.580 00:04:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:11.580 00:04:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:11.580 00:04:42 -- common/autotest_common.sh@10 -- # set +x 00:27:11.838 ************************************ 00:27:11.838 START TEST nvmf_abort_qd_sizes 00:27:11.838 ************************************ 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:27:11.838 * Looking for test storage... 00:27:11.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:11.838 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:11.839 00:04:42 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:27:11.839 00:04:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.739 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:13.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:13.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:13.740 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:13.740 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
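The gather_supported_nvmf_pci_devs pass traced above boils down to a sysfs walk: match the whitelisted NIC PCI IDs, then read each function's net/ directory to learn its interface name. A minimal standalone sketch, assuming the same E810 IDs (0x8086:0x159b) the trace prints; this is not SPDK's literal code:

#!/usr/bin/env bash
# Map each E810 PCI function to its kernel netdev and keep only interfaces
# whose operstate is "up", mirroring the [[ up == up ]] checks in the trace.
net_devs=()
for bdf in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $bdf (0x8086 - 0x159b)"
    for path in "/sys/bus/pci/devices/$bdf/net/"*; do
        [[ -e $path ]] || continue
        [[ $(< "$path/operstate") == up ]] && net_devs+=("${path##*/}")
    done
done
printf 'usable net device: %s\n' "${net_devs[@]}"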
00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:13.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:13.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:27:13.740 00:27:13.740 --- 10.0.0.2 ping statistics --- 00:27:13.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.740 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:13.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:13.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:27:13.740 00:27:13.740 --- 10.0.0.1 ping statistics --- 00:27:13.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.740 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:27:13.740 00:04:44 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:15.114 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:15.114 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:15.114 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:16.047 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=3502801 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 3502801 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 3502801 ']' 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
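The nvmf_tcp_init and nvmfappstart sequence above is easier to follow without the xtrace noise. A condensed replay of the commands from the trace (interface names as discovered on this host; the socket wait at the end is a simplification of the real waitforlisten helper):

# The two E810 ports are cabled back to back, so moving one into a network
# namespace gives the target and the initiator separate IP stacks.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                 # reachability both ways
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# Start the SPDK target inside the namespace, then wait for its RPC socket.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done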
00:27:16.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:16.047 00:04:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:16.047 [2024-07-25 00:04:46.578648] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:27:16.047 [2024-07-25 00:04:46.578723] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:16.047 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.047 [2024-07-25 00:04:46.649098] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:16.305 [2024-07-25 00:04:46.772790] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:16.305 [2024-07-25 00:04:46.772848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:16.305 [2024-07-25 00:04:46.772877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:16.305 [2024-07-25 00:04:46.772892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:16.305 [2024-07-25 00:04:46.772903] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:16.305 [2024-07-25 00:04:46.776265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.305 [2024-07-25 00:04:46.776300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:16.305 [2024-07-25 00:04:46.776417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:16.305 [2024-07-25 00:04:46.776420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:17.237 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:27:17.238 00:04:47 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:17.238 00:04:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:17.238 ************************************ 00:27:17.238 START TEST spdk_target_abort 00:27:17.238 ************************************ 00:27:17.238 00:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:27:17.238 00:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:27:17.238 00:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:27:17.238 00:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:17.238 00:04:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.516 spdk_targetn1 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.516 [2024-07-25 00:04:50.405440] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:20.516 [2024-07-25 00:04:50.437739] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:20.516 00:04:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:20.516 EAL: No free 2048 kB hugepages 
reported on node 1 00:27:23.047 Initializing NVMe Controllers 00:27:23.047 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:23.047 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:23.047 Initialization complete. Launching workers. 00:27:23.047 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12750, failed: 0 00:27:23.047 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1288, failed to submit 11462 00:27:23.047 success 745, unsuccess 543, failed 0 00:27:23.047 00:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:23.047 00:04:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:23.306 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.613 Initializing NVMe Controllers 00:27:26.613 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:26.613 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:26.613 Initialization complete. Launching workers. 00:27:26.613 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8518, failed: 0 00:27:26.613 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1231, failed to submit 7287 00:27:26.613 success 342, unsuccess 889, failed 0 00:27:26.613 00:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:26.613 00:04:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:26.613 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.901 Initializing NVMe Controllers 00:27:29.901 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:29.901 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:29.901 Initialization complete. Launching workers. 
00:27:29.901 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29124, failed: 0 00:27:29.901 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2665, failed to submit 26459 00:27:29.901 success 474, unsuccess 2191, failed 0 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.901 00:05:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3502801 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 3502801 ']' 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 3502801 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:30.837 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3502801 00:27:31.095 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:31.095 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:31.095 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3502801' 00:27:31.095 killing process with pid 3502801 00:27:31.095 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 3502801 00:27:31.095 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 3502801 00:27:31.354 00:27:31.354 real 0m14.156s 00:27:31.354 user 0m55.839s 00:27:31.354 sys 0m2.680s 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:31.354 ************************************ 00:27:31.354 END TEST spdk_target_abort 00:27:31.354 ************************************ 00:27:31.354 00:05:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:31.354 00:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:31.354 00:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:31.354 00:05:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:31.354 ************************************ 00:27:31.354 START TEST kernel_target_abort 00:27:31.354 
************************************ 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:31.354 00:05:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:32.290 Waiting for block devices as requested 00:27:32.290 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:32.548 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:32.548 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:32.548 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:32.807 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:32.807 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:32.807 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:32.807 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:33.064 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:33.064 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:33.064 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:33.064 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:33.322 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:33.322 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:33.322 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:33.322 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:33.322 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:33.580 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:33.580 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:33.580 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:33.580 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1660 -- # local device=nvme0n1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1663 -- # [[ none != none ]] 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:33.581 No valid GPT data, bailing 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:33.581 00:05:04 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:33.581 00:27:33.581 Discovery Log Number of Records 2, Generation counter 2 00:27:33.581 =====Discovery Log Entry 0====== 00:27:33.581 trtype: tcp 00:27:33.581 adrfam: ipv4 00:27:33.581 subtype: current discovery subsystem 00:27:33.581 treq: not specified, sq flow control disable supported 00:27:33.581 portid: 1 00:27:33.581 trsvcid: 4420 00:27:33.581 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:33.581 traddr: 10.0.0.1 00:27:33.581 eflags: none 00:27:33.581 sectype: none 00:27:33.581 =====Discovery Log Entry 1====== 00:27:33.581 trtype: tcp 00:27:33.581 adrfam: ipv4 00:27:33.581 subtype: nvme subsystem 00:27:33.581 treq: not specified, sq flow control disable supported 00:27:33.581 portid: 1 00:27:33.581 trsvcid: 4420 00:27:33.581 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:33.581 traddr: 10.0.0.1 00:27:33.581 eflags: none 00:27:33.581 sectype: none 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.581 00:05:04 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:33.581 00:05:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:33.841 EAL: No free 2048 kB hugepages reported on node 1 00:27:37.130 Initializing NVMe Controllers 00:27:37.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:37.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:37.130 Initialization complete. Launching workers. 00:27:37.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 35243, failed: 0 00:27:37.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35243, failed to submit 0 00:27:37.130 success 0, unsuccess 35243, failed 0 00:27:37.130 00:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:37.130 00:05:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:37.130 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.418 Initializing NVMe Controllers 00:27:40.418 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:40.418 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:40.418 Initialization complete. Launching workers. 
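configure_kernel_target, traced above, needs no daemon at all: the kernel nvmet target is driven entirely through configfs with mkdir, echo, and ln -s. The same session by hand, using the standard nvmet attribute names; the attr_serial destination of the echo at common.sh@665 is inferred, everything else follows the trace:

# Export /dev/nvme0n1 over kernel NVMe/TCP purely through configfs.
modprobe nvmet
modprobe nvmet-tcp
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2016-06.io.spdk:testnqn
mkdir subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
mkdir ports/1
echo SPDK-nqn.2016-06.io.spdk:testnqn \
    > subsystems/nqn.2016-06.io.spdk:testnqn/attr_serial   # assumed target of the @665 echo
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/attr_allow_any_host
echo /dev/nvme0n1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/device_path
echo 1 > subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1/enable
echo 10.0.0.1 > ports/1/addr_traddr
echo tcp > ports/1/addr_trtype
echo 4420 > ports/1/addr_trsvcid
echo ipv4 > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ports/1/subsystems/
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list discovery + testnqn, as in the log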
00:27:40.418 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 73084, failed: 0 00:27:40.418 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18434, failed to submit 54650 00:27:40.418 success 0, unsuccess 18434, failed 0 00:27:40.418 00:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:40.418 00:05:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:40.418 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.952 Initializing NVMe Controllers 00:27:42.952 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:42.952 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:42.952 Initialization complete. Launching workers. 00:27:42.952 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66378, failed: 0 00:27:42.952 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16582, failed to submit 49796 00:27:42.952 success 0, unsuccess 16582, failed 0 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:42.952 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:43.209 00:05:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:44.146 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:44.146 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:44.146 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:27:44.146 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:44.405 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:45.344 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:45.344 00:27:45.344 real 0m13.997s 00:27:45.344 user 0m5.488s 00:27:45.344 sys 0m3.248s 00:27:45.344 00:05:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:45.344 00:05:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:45.344 ************************************ 00:27:45.344 END TEST kernel_target_abort 00:27:45.344 ************************************ 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:45.344 rmmod nvme_tcp 00:27:45.344 rmmod nvme_fabrics 00:27:45.344 rmmod nvme_keyring 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 3502801 ']' 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 3502801 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 3502801 ']' 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 3502801 00:27:45.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (3502801) - No such process 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 3502801 is not found' 00:27:45.344 Process with pid 3502801 is not found 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:45.344 00:05:15 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:46.278 Waiting for block devices as requested 00:27:46.539 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:46.539 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:46.800 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:46.800 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:46.800 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:46.800 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:47.059 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:47.059 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:47.059 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:47.059 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:47.341 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:47.341 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:47.341 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:47.341 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:47.599 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:47.599 0000:80:04.1 
(8086 0e21): vfio-pci -> ioatdma 00:27:47.599 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:47.599 00:05:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.136 00:05:20 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.136 00:27:50.136 real 0m38.013s 00:27:50.136 user 1m3.536s 00:27:50.136 sys 0m9.275s 00:27:50.136 00:05:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.136 00:05:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:50.136 ************************************ 00:27:50.136 END TEST nvmf_abort_qd_sizes 00:27:50.136 ************************************ 00:27:50.136 00:05:20 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:50.136 00:05:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:50.136 00:05:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.136 00:05:20 -- common/autotest_common.sh@10 -- # set +x 00:27:50.136 ************************************ 00:27:50.136 START TEST keyring_file 00:27:50.136 ************************************ 00:27:50.136 00:05:20 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:50.136 * Looking for test storage... 
00:27:50.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:50.136 00:05:20 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:50.136 00:05:20 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.136 00:05:20 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.136 00:05:20 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.136 00:05:20 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.136 00:05:20 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.136 00:05:20 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.136 00:05:20 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.136 00:05:20 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.136 00:05:20 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:50.137 00:05:20 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TnT0nMfGDV 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:50.137 00:05:20 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TnT0nMfGDV 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TnT0nMfGDV 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.TnT0nMfGDV 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.0SNiJccc8T 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:50.137 00:05:20 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.0SNiJccc8T 00:27:50.137 00:05:20 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.0SNiJccc8T 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.0SNiJccc8T 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@30 -- # tgtpid=3508571 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:50.137 00:05:20 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3508571 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3508571 ']' 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.137 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:50.137 00:05:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:50.137 [2024-07-25 00:05:20.464956] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 
00:27:50.137 [2024-07-25 00:05:20.465044] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508571 ] 00:27:50.137 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.137 [2024-07-25 00:05:20.522366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.137 [2024-07-25 00:05:20.638578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:51.073 00:05:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.073 [2024-07-25 00:05:21.405059] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.073 null0 00:27:51.073 [2024-07-25 00:05:21.437075] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:51.073 [2024-07-25 00:05:21.437564] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.073 [2024-07-25 00:05:21.445067] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:51.073 00:05:21 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.073 [2024-07-25 00:05:21.457078] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:51.073 request: 00:27:51.073 { 00:27:51.073 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.073 "secure_channel": false, 00:27:51.073 "listen_address": { 00:27:51.073 "trtype": "tcp", 00:27:51.073 "traddr": "127.0.0.1", 00:27:51.073 "trsvcid": "4420" 00:27:51.073 }, 00:27:51.073 "method": "nvmf_subsystem_add_listener", 00:27:51.073 "req_id": 1 00:27:51.073 } 00:27:51.073 Got JSON-RPC error response 00:27:51.073 response: 00:27:51.073 { 00:27:51.073 "code": -32602, 00:27:51.073 "message": "Invalid parameters" 00:27:51.073 } 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 
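The failed nvmf_subsystem_add_listener call above is intentional: the target already listens on 127.0.0.1:4420, so the RPC must come back with -32602 "Invalid parameters", and the NOT wrapper turns that expected failure into a pass (the es bookkeeping that follows records the exit status). A hedged sketch of the idiom; the real helper lives in test/common/autotest_common.sh and this stand-in skips its extra argument validation:

NOT() {
    # Invert the exit status: succeed only if the wrapped command fails.
    if "$@"; then
        return 1    # unexpected success -> test failure
    fi
    return 0        # failed as expected
}
# e.g. re-adding the existing 127.0.0.1:4420 listener must be rejected:
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0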
00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:51.073 00:05:21 keyring_file -- keyring/file.sh@46 -- # bperfpid=3508706 00:27:51.073 00:05:21 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:51.073 00:05:21 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3508706 /var/tmp/bperf.sock 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3508706 ']' 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:51.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:51.073 00:05:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:51.073 [2024-07-25 00:05:21.507278] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:27:51.073 [2024-07-25 00:05:21.507361] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3508706 ] 00:27:51.073 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.073 [2024-07-25 00:05:21.568416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.331 [2024-07-25 00:05:21.692527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.897 00:05:22 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:51.897 00:05:22 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:27:51.897 00:05:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:51.897 00:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:52.155 00:05:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0SNiJccc8T 00:27:52.155 00:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0SNiJccc8T 00:27:52.413 00:05:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:52.413 00:05:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:52.413 00:05:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.413 00:05:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.413 00:05:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.670 00:05:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.TnT0nMfGDV == \/\t\m\p\/\t\m\p\.\T\n\T\0\n\M\f\G\D\V ]] 00:27:52.670 00:05:23 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:27:52.670 00:05:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:52.670 00:05:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.670 00:05:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:52.670 00:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:52.928 00:05:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.0SNiJccc8T == \/\t\m\p\/\t\m\p\.\0\S\N\i\J\c\c\c\8\T ]] 00:27:52.928 00:05:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:52.928 00:05:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:52.928 00:05:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:52.928 00:05:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:52.928 00:05:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:52.928 00:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.185 00:05:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:53.185 00:05:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:53.185 00:05:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:53.185 00:05:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:53.185 00:05:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.185 00:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.185 00:05:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:53.441 00:05:23 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:27:53.441 00:05:23 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:53.441 00:05:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:53.698 [2024-07-25 00:05:24.175825] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:53.698 nvme0n1 00:27:53.698 00:05:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:53.698 00:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:53.698 00:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:53.698 00:05:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.698 00:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.698 00:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:53.955 00:05:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:53.956 00:05:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:53.956 00:05:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:53.956 00:05:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:53.956 00:05:24 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:53.956 00:05:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:53.956 00:05:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:54.213 00:05:24 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:54.213 00:05:24 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:54.473 Running I/O for 1 seconds... 00:27:55.411 00:27:55.411 Latency(us) 00:27:55.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.412 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:55.412 nvme0n1 : 1.01 7371.03 28.79 0.00 0.00 17285.24 7670.14 28156.21 00:27:55.412 =================================================================================================================== 00:27:55.412 Total : 7371.03 28.79 0.00 0.00 17285.24 7670.14 28156.21 00:27:55.412 0 00:27:55.412 00:05:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:55.412 00:05:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:55.670 00:05:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:55.670 00:05:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:55.670 00:05:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:55.670 00:05:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:55.670 00:05:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.670 00:05:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:55.928 00:05:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:55.928 00:05:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:55.928 00:05:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:55.928 00:05:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:55.928 00:05:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:55.928 00:05:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:55.928 00:05:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:56.186 00:05:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:56.186 00:05:26 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.186 00:05:26 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:56.186 00:05:26 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.186 00:05:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:56.444 [2024-07-25 00:05:26.891369] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:56.444 [2024-07-25 00:05:26.891858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199e9a0 (107): Transport endpoint is not connected 00:27:56.444 [2024-07-25 00:05:26.892845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x199e9a0 (9): Bad file descriptor 00:27:56.444 [2024-07-25 00:05:26.893843] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:56.444 [2024-07-25 00:05:26.893864] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:56.444 [2024-07-25 00:05:26.893880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:56.444 request: 00:27:56.444 { 00:27:56.444 "name": "nvme0", 00:27:56.444 "trtype": "tcp", 00:27:56.444 "traddr": "127.0.0.1", 00:27:56.444 "adrfam": "ipv4", 00:27:56.444 "trsvcid": "4420", 00:27:56.444 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:56.444 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:56.444 "prchk_reftag": false, 00:27:56.444 "prchk_guard": false, 00:27:56.444 "hdgst": false, 00:27:56.444 "ddgst": false, 00:27:56.444 "psk": "key1", 00:27:56.444 "method": "bdev_nvme_attach_controller", 00:27:56.444 "req_id": 1 00:27:56.444 } 00:27:56.444 Got JSON-RPC error response 00:27:56.444 response: 00:27:56.444 { 00:27:56.444 "code": -5, 00:27:56.444 "message": "Input/output error" 00:27:56.444 } 00:27:56.444 00:05:26 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:56.444 00:05:26 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:56.444 00:05:26 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:56.444 00:05:26 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:56.444 00:05:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:56.444 00:05:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:56.444 00:05:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.444 00:05:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.444 00:05:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:56.444 00:05:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.703 00:05:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:56.703 00:05:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:56.703 00:05:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:56.703 00:05:27 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:56.703 00:05:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:56.703 00:05:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:56.703 00:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:56.960 00:05:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:56.960 00:05:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:56.960 00:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:57.218 00:05:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:57.218 00:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:57.476 00:05:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:57.476 00:05:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:57.476 00:05:27 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:57.735 00:05:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:57.735 00:05:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.TnT0nMfGDV 00:27:57.735 00:05:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:57.735 00:05:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:57.735 00:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:57.993 [2024-07-25 00:05:28.383660] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.TnT0nMfGDV': 0100660 00:27:57.993 [2024-07-25 00:05:28.383713] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:57.993 request: 00:27:57.993 { 00:27:57.993 "name": "key0", 00:27:57.993 "path": "/tmp/tmp.TnT0nMfGDV", 00:27:57.993 "method": "keyring_file_add_key", 00:27:57.993 "req_id": 1 00:27:57.993 } 00:27:57.993 Got JSON-RPC error response 00:27:57.993 response: 00:27:57.993 { 00:27:57.993 "code": -1, 00:27:57.993 "message": "Operation not permitted" 00:27:57.993 } 00:27:57.993 00:05:28 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:57.993 00:05:28 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:57.993 00:05:28 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:57.993 00:05:28 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:57.993 00:05:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.TnT0nMfGDV 00:27:57.993 00:05:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:57.993 00:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.TnT0nMfGDV 00:27:58.251 00:05:28 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.TnT0nMfGDV 00:27:58.251 00:05:28 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:58.251 00:05:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:58.251 00:05:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:58.251 00:05:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:58.251 00:05:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:58.251 00:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:58.508 00:05:28 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:58.508 00:05:28 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:58.508 00:05:28 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.508 00:05:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:58.508 [2024-07-25 00:05:29.113685] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.TnT0nMfGDV': No such file or directory 00:27:58.508 [2024-07-25 00:05:29.113723] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:58.508 [2024-07-25 00:05:29.113755] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:58.508 [2024-07-25 00:05:29.113769] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:58.508 [2024-07-25 00:05:29.113782] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:58.508 request: 00:27:58.508 { 00:27:58.508 "name": "nvme0", 00:27:58.508 "trtype": "tcp", 00:27:58.508 "traddr": "127.0.0.1", 00:27:58.508 "adrfam": "ipv4", 00:27:58.508 
"trsvcid": "4420", 00:27:58.508 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:58.508 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:58.508 "prchk_reftag": false, 00:27:58.508 "prchk_guard": false, 00:27:58.508 "hdgst": false, 00:27:58.508 "ddgst": false, 00:27:58.508 "psk": "key0", 00:27:58.508 "method": "bdev_nvme_attach_controller", 00:27:58.508 "req_id": 1 00:27:58.508 } 00:27:58.508 Got JSON-RPC error response 00:27:58.508 response: 00:27:58.508 { 00:27:58.508 "code": -19, 00:27:58.508 "message": "No such device" 00:27:58.508 } 00:27:58.767 00:05:29 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:27:58.767 00:05:29 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:58.767 00:05:29 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:58.767 00:05:29 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:58.767 00:05:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:58.767 00:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:59.025 00:05:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:59.025 00:05:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:59.025 00:05:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:59.025 00:05:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:59.025 00:05:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:59.025 00:05:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:59.026 00:05:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.UOfac0M5dV 00:27:59.026 00:05:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:59.026 00:05:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:59.026 00:05:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.UOfac0M5dV 00:27:59.026 00:05:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.UOfac0M5dV 00:27:59.026 00:05:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.UOfac0M5dV 00:27:59.026 00:05:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UOfac0M5dV 00:27:59.026 00:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UOfac0M5dV 00:27:59.284 00:05:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:59.284 00:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:59.541 nvme0n1 00:27:59.541 
00:05:29 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:59.541 00:05:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:59.542 00:05:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:59.542 00:05:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:59.542 00:05:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:59.542 00:05:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:59.799 00:05:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:59.799 00:05:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:59.799 00:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:00.059 00:05:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:28:00.059 00:05:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:28:00.059 00:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.059 00:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.059 00:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.317 00:05:30 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:28:00.317 00:05:30 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:28:00.317 00:05:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:00.317 00:05:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:00.317 00:05:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:00.317 00:05:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:00.317 00:05:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:00.574 00:05:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:28:00.575 00:05:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:00.575 00:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:00.832 00:05:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:28:00.832 00:05:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:28:00.832 00:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:01.090 00:05:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:28:01.090 00:05:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.UOfac0M5dV 00:28:01.090 00:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.UOfac0M5dV 00:28:01.348 00:05:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.0SNiJccc8T 00:28:01.348 00:05:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.0SNiJccc8T 00:28:01.606 00:05:32 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.606 00:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:01.864 nvme0n1 00:28:01.864 00:05:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:28:01.864 00:05:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:28:02.123 00:05:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:28:02.123 "subsystems": [ 00:28:02.123 { 00:28:02.123 "subsystem": "keyring", 00:28:02.123 "config": [ 00:28:02.123 { 00:28:02.123 "method": "keyring_file_add_key", 00:28:02.123 "params": { 00:28:02.123 "name": "key0", 00:28:02.123 "path": "/tmp/tmp.UOfac0M5dV" 00:28:02.123 } 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "method": "keyring_file_add_key", 00:28:02.123 "params": { 00:28:02.123 "name": "key1", 00:28:02.123 "path": "/tmp/tmp.0SNiJccc8T" 00:28:02.123 } 00:28:02.123 } 00:28:02.123 ] 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "subsystem": "iobuf", 00:28:02.123 "config": [ 00:28:02.123 { 00:28:02.123 "method": "iobuf_set_options", 00:28:02.123 "params": { 00:28:02.123 "small_pool_count": 8192, 00:28:02.123 "large_pool_count": 1024, 00:28:02.123 "small_bufsize": 8192, 00:28:02.123 "large_bufsize": 135168 00:28:02.123 } 00:28:02.123 } 00:28:02.123 ] 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "subsystem": "sock", 00:28:02.123 "config": [ 00:28:02.123 { 00:28:02.123 "method": "sock_set_default_impl", 00:28:02.123 "params": { 00:28:02.123 "impl_name": "posix" 00:28:02.123 } 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "method": "sock_impl_set_options", 00:28:02.123 "params": { 00:28:02.123 "impl_name": "ssl", 00:28:02.123 "recv_buf_size": 4096, 00:28:02.123 "send_buf_size": 4096, 00:28:02.123 "enable_recv_pipe": true, 00:28:02.123 "enable_quickack": false, 00:28:02.123 "enable_placement_id": 0, 00:28:02.123 "enable_zerocopy_send_server": true, 00:28:02.123 "enable_zerocopy_send_client": false, 00:28:02.123 "zerocopy_threshold": 0, 00:28:02.123 "tls_version": 0, 00:28:02.123 "enable_ktls": false 00:28:02.123 } 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "method": "sock_impl_set_options", 00:28:02.123 "params": { 00:28:02.123 "impl_name": "posix", 00:28:02.123 "recv_buf_size": 2097152, 00:28:02.123 "send_buf_size": 2097152, 00:28:02.123 "enable_recv_pipe": true, 00:28:02.123 "enable_quickack": false, 00:28:02.123 "enable_placement_id": 0, 00:28:02.123 "enable_zerocopy_send_server": true, 00:28:02.123 "enable_zerocopy_send_client": false, 00:28:02.123 "zerocopy_threshold": 0, 00:28:02.123 "tls_version": 0, 00:28:02.123 "enable_ktls": false 00:28:02.123 } 00:28:02.123 } 00:28:02.123 ] 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "subsystem": "vmd", 00:28:02.123 "config": [] 00:28:02.123 }, 00:28:02.123 { 00:28:02.123 "subsystem": "accel", 00:28:02.123 "config": [ 00:28:02.123 { 00:28:02.123 "method": "accel_set_options", 00:28:02.123 "params": { 00:28:02.124 "small_cache_size": 128, 00:28:02.124 "large_cache_size": 16, 00:28:02.124 "task_count": 2048, 00:28:02.124 "sequence_count": 2048, 00:28:02.124 "buf_count": 2048 00:28:02.124 } 00:28:02.124 } 00:28:02.124 ] 00:28:02.124 
}, 00:28:02.124 { 00:28:02.124 "subsystem": "bdev", 00:28:02.124 "config": [ 00:28:02.124 { 00:28:02.124 "method": "bdev_set_options", 00:28:02.124 "params": { 00:28:02.124 "bdev_io_pool_size": 65535, 00:28:02.124 "bdev_io_cache_size": 256, 00:28:02.124 "bdev_auto_examine": true, 00:28:02.124 "iobuf_small_cache_size": 128, 00:28:02.124 "iobuf_large_cache_size": 16 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_raid_set_options", 00:28:02.124 "params": { 00:28:02.124 "process_window_size_kb": 1024, 00:28:02.124 "process_max_bandwidth_mb_sec": 0 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_iscsi_set_options", 00:28:02.124 "params": { 00:28:02.124 "timeout_sec": 30 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_nvme_set_options", 00:28:02.124 "params": { 00:28:02.124 "action_on_timeout": "none", 00:28:02.124 "timeout_us": 0, 00:28:02.124 "timeout_admin_us": 0, 00:28:02.124 "keep_alive_timeout_ms": 10000, 00:28:02.124 "arbitration_burst": 0, 00:28:02.124 "low_priority_weight": 0, 00:28:02.124 "medium_priority_weight": 0, 00:28:02.124 "high_priority_weight": 0, 00:28:02.124 "nvme_adminq_poll_period_us": 10000, 00:28:02.124 "nvme_ioq_poll_period_us": 0, 00:28:02.124 "io_queue_requests": 512, 00:28:02.124 "delay_cmd_submit": true, 00:28:02.124 "transport_retry_count": 4, 00:28:02.124 "bdev_retry_count": 3, 00:28:02.124 "transport_ack_timeout": 0, 00:28:02.124 "ctrlr_loss_timeout_sec": 0, 00:28:02.124 "reconnect_delay_sec": 0, 00:28:02.124 "fast_io_fail_timeout_sec": 0, 00:28:02.124 "disable_auto_failback": false, 00:28:02.124 "generate_uuids": false, 00:28:02.124 "transport_tos": 0, 00:28:02.124 "nvme_error_stat": false, 00:28:02.124 "rdma_srq_size": 0, 00:28:02.124 "io_path_stat": false, 00:28:02.124 "allow_accel_sequence": false, 00:28:02.124 "rdma_max_cq_size": 0, 00:28:02.124 "rdma_cm_event_timeout_ms": 0, 00:28:02.124 "dhchap_digests": [ 00:28:02.124 "sha256", 00:28:02.124 "sha384", 00:28:02.124 "sha512" 00:28:02.124 ], 00:28:02.124 "dhchap_dhgroups": [ 00:28:02.124 "null", 00:28:02.124 "ffdhe2048", 00:28:02.124 "ffdhe3072", 00:28:02.124 "ffdhe4096", 00:28:02.124 "ffdhe6144", 00:28:02.124 "ffdhe8192" 00:28:02.124 ] 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_nvme_attach_controller", 00:28:02.124 "params": { 00:28:02.124 "name": "nvme0", 00:28:02.124 "trtype": "TCP", 00:28:02.124 "adrfam": "IPv4", 00:28:02.124 "traddr": "127.0.0.1", 00:28:02.124 "trsvcid": "4420", 00:28:02.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.124 "prchk_reftag": false, 00:28:02.124 "prchk_guard": false, 00:28:02.124 "ctrlr_loss_timeout_sec": 0, 00:28:02.124 "reconnect_delay_sec": 0, 00:28:02.124 "fast_io_fail_timeout_sec": 0, 00:28:02.124 "psk": "key0", 00:28:02.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.124 "hdgst": false, 00:28:02.124 "ddgst": false 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_nvme_set_hotplug", 00:28:02.124 "params": { 00:28:02.124 "period_us": 100000, 00:28:02.124 "enable": false 00:28:02.124 } 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "method": "bdev_wait_for_examine" 00:28:02.124 } 00:28:02.124 ] 00:28:02.124 }, 00:28:02.124 { 00:28:02.124 "subsystem": "nbd", 00:28:02.124 "config": [] 00:28:02.124 } 00:28:02.124 ] 00:28:02.124 }' 00:28:02.124 00:05:32 keyring_file -- keyring/file.sh@114 -- # killprocess 3508706 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3508706 ']' 00:28:02.124 00:05:32 
keyring_file -- common/autotest_common.sh@952 -- # kill -0 3508706 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3508706 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3508706' 00:28:02.124 killing process with pid 3508706 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@967 -- # kill 3508706 00:28:02.124 Received shutdown signal, test time was about 1.000000 seconds 00:28:02.124 00:28:02.124 Latency(us) 00:28:02.124 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:02.124 =================================================================================================================== 00:28:02.124 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:02.124 00:05:32 keyring_file -- common/autotest_common.sh@972 -- # wait 3508706 00:28:02.382 00:05:32 keyring_file -- keyring/file.sh@117 -- # bperfpid=3510174 00:28:02.382 00:05:32 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3510174 /var/tmp/bperf.sock 00:28:02.382 00:05:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 3510174 ']' 00:28:02.382 00:05:32 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:28:02.382 00:05:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:02.382 00:05:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:02.382 00:05:32 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:28:02.382 "subsystems": [ 00:28:02.382 { 00:28:02.382 "subsystem": "keyring", 00:28:02.382 "config": [ 00:28:02.382 { 00:28:02.382 "method": "keyring_file_add_key", 00:28:02.382 "params": { 00:28:02.382 "name": "key0", 00:28:02.382 "path": "/tmp/tmp.UOfac0M5dV" 00:28:02.382 } 00:28:02.382 }, 00:28:02.382 { 00:28:02.382 "method": "keyring_file_add_key", 00:28:02.382 "params": { 00:28:02.382 "name": "key1", 00:28:02.382 "path": "/tmp/tmp.0SNiJccc8T" 00:28:02.382 } 00:28:02.382 } 00:28:02.382 ] 00:28:02.382 }, 00:28:02.382 { 00:28:02.382 "subsystem": "iobuf", 00:28:02.382 "config": [ 00:28:02.382 { 00:28:02.382 "method": "iobuf_set_options", 00:28:02.382 "params": { 00:28:02.382 "small_pool_count": 8192, 00:28:02.382 "large_pool_count": 1024, 00:28:02.382 "small_bufsize": 8192, 00:28:02.382 "large_bufsize": 135168 00:28:02.382 } 00:28:02.382 } 00:28:02.382 ] 00:28:02.382 }, 00:28:02.382 { 00:28:02.382 "subsystem": "sock", 00:28:02.382 "config": [ 00:28:02.382 { 00:28:02.382 "method": "sock_set_default_impl", 00:28:02.382 "params": { 00:28:02.382 "impl_name": "posix" 00:28:02.382 } 00:28:02.382 }, 00:28:02.382 { 00:28:02.382 "method": "sock_impl_set_options", 00:28:02.382 "params": { 00:28:02.382 "impl_name": "ssl", 00:28:02.382 "recv_buf_size": 4096, 00:28:02.382 "send_buf_size": 4096, 00:28:02.383 "enable_recv_pipe": true, 00:28:02.383 "enable_quickack": false, 00:28:02.383 "enable_placement_id": 0, 00:28:02.383 "enable_zerocopy_send_server": true, 00:28:02.383 "enable_zerocopy_send_client": false, 
00:28:02.383 "zerocopy_threshold": 0, 00:28:02.383 "tls_version": 0, 00:28:02.383 "enable_ktls": false 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "sock_impl_set_options", 00:28:02.383 "params": { 00:28:02.383 "impl_name": "posix", 00:28:02.383 "recv_buf_size": 2097152, 00:28:02.383 "send_buf_size": 2097152, 00:28:02.383 "enable_recv_pipe": true, 00:28:02.383 "enable_quickack": false, 00:28:02.383 "enable_placement_id": 0, 00:28:02.383 "enable_zerocopy_send_server": true, 00:28:02.383 "enable_zerocopy_send_client": false, 00:28:02.383 "zerocopy_threshold": 0, 00:28:02.383 "tls_version": 0, 00:28:02.383 "enable_ktls": false 00:28:02.383 } 00:28:02.383 } 00:28:02.383 ] 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "subsystem": "vmd", 00:28:02.383 "config": [] 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "subsystem": "accel", 00:28:02.383 "config": [ 00:28:02.383 { 00:28:02.383 "method": "accel_set_options", 00:28:02.383 "params": { 00:28:02.383 "small_cache_size": 128, 00:28:02.383 "large_cache_size": 16, 00:28:02.383 "task_count": 2048, 00:28:02.383 "sequence_count": 2048, 00:28:02.383 "buf_count": 2048 00:28:02.383 } 00:28:02.383 } 00:28:02.383 ] 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "subsystem": "bdev", 00:28:02.383 "config": [ 00:28:02.383 { 00:28:02.383 "method": "bdev_set_options", 00:28:02.383 "params": { 00:28:02.383 "bdev_io_pool_size": 65535, 00:28:02.383 "bdev_io_cache_size": 256, 00:28:02.383 "bdev_auto_examine": true, 00:28:02.383 "iobuf_small_cache_size": 128, 00:28:02.383 "iobuf_large_cache_size": 16 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_raid_set_options", 00:28:02.383 "params": { 00:28:02.383 "process_window_size_kb": 1024, 00:28:02.383 "process_max_bandwidth_mb_sec": 0 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_iscsi_set_options", 00:28:02.383 "params": { 00:28:02.383 "timeout_sec": 30 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_nvme_set_options", 00:28:02.383 "params": { 00:28:02.383 "action_on_timeout": "none", 00:28:02.383 "timeout_us": 0, 00:28:02.383 "timeout_admin_us": 0, 00:28:02.383 "keep_alive_timeout_ms": 10000, 00:28:02.383 "arbitration_burst": 0, 00:28:02.383 "low_priority_weight": 0, 00:28:02.383 "medium_priority_weight": 0, 00:28:02.383 "high_priority_weight": 0, 00:28:02.383 "nvme_adminq_poll_period_us": 10000, 00:28:02.383 "nvme_ioq_poll_period_us": 0, 00:28:02.383 "io_queue_requests": 512, 00:28:02.383 "delay_cmd_submit": true, 00:28:02.383 "transport_retry_count": 4, 00:28:02.383 "bdev_retry_count": 3, 00:28:02.383 "transport_ack_timeout": 0, 00:28:02.383 "ctrlr_loss_timeout_sec": 0, 00:28:02.383 "reconnect_delay_sec": 0, 00:28:02.383 "fast_io_fail_timeout_sec": 0, 00:28:02.383 "disable_auto_failback": false, 00:28:02.383 "generate_uuids": false, 00:28:02.383 "transport_tos": 0, 00:28:02.383 "nvme_error_stat": false, 00:28:02.383 "rdma_srq_size": 0, 00:28:02.383 "io_path_stat": false, 00:28:02.383 "allow_accel_sequence": false, 00:28:02.383 "rdma_max_cq_size": 0, 00:28:02.383 "rdma_cm_event_timeout_ms": 0, 00:28:02.383 "dhchap_digests": [ 00:28:02.383 "sha256", 00:28:02.383 "sha384", 00:28:02.383 "sha512" 00:28:02.383 ], 00:28:02.383 "dhchap_dhgroups": [ 00:28:02.383 "null", 00:28:02.383 "ffdhe2048", 00:28:02.383 "ffdhe3072", 00:28:02.383 "ffdhe4096", 00:28:02.383 "ffdhe6144", 00:28:02.383 "ffdhe8192" 00:28:02.383 ] 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_nvme_attach_controller", 00:28:02.383 
"params": { 00:28:02.383 "name": "nvme0", 00:28:02.383 "trtype": "TCP", 00:28:02.383 "adrfam": "IPv4", 00:28:02.383 "traddr": "127.0.0.1", 00:28:02.383 "trsvcid": "4420", 00:28:02.383 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:02.383 "prchk_reftag": false, 00:28:02.383 "prchk_guard": false, 00:28:02.383 "ctrlr_loss_timeout_sec": 0, 00:28:02.383 "reconnect_delay_sec": 0, 00:28:02.383 "fast_io_fail_timeout_sec": 0, 00:28:02.383 "psk": "key0", 00:28:02.383 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:02.383 "hdgst": false, 00:28:02.383 "ddgst": false 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_nvme_set_hotplug", 00:28:02.383 "params": { 00:28:02.383 "period_us": 100000, 00:28:02.383 "enable": false 00:28:02.383 } 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "method": "bdev_wait_for_examine" 00:28:02.383 } 00:28:02.383 ] 00:28:02.383 }, 00:28:02.383 { 00:28:02.383 "subsystem": "nbd", 00:28:02.383 "config": [] 00:28:02.383 } 00:28:02.383 ] 00:28:02.383 }' 00:28:02.383 00:05:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:02.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:02.383 00:05:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:02.383 00:05:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:02.383 [2024-07-25 00:05:32.959727] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:28:02.383 [2024-07-25 00:05:32.959803] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510174 ] 00:28:02.383 EAL: No free 2048 kB hugepages reported on node 1 00:28:02.641 [2024-07-25 00:05:33.021167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:02.641 [2024-07-25 00:05:33.135416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:02.903 [2024-07-25 00:05:33.329725] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:03.497 00:05:33 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.497 00:05:33 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:28:03.497 00:05:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:28:03.497 00:05:33 keyring_file -- keyring/file.sh@120 -- # jq length 00:28:03.497 00:05:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:03.755 00:05:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:28:03.755 00:05:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:28:03.755 00:05:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:03.755 00:05:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:03.755 00:05:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:03.755 00:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:03.755 00:05:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:04.013 00:05:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:28:04.013 00:05:34 keyring_file -- 
keyring/file.sh@122 -- # get_refcnt key1 00:28:04.013 00:05:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:04.013 00:05:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:04.013 00:05:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:04.013 00:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:04.013 00:05:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:04.272 00:05:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:28:04.272 00:05:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:28:04.272 00:05:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:28:04.272 00:05:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:28:04.533 00:05:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:28:04.533 00:05:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:28:04.533 00:05:34 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.UOfac0M5dV /tmp/tmp.0SNiJccc8T 00:28:04.533 00:05:34 keyring_file -- keyring/file.sh@20 -- # killprocess 3510174 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3510174 ']' 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3510174 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3510174 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3510174' 00:28:04.533 killing process with pid 3510174 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@967 -- # kill 3510174 00:28:04.533 Received shutdown signal, test time was about 1.000000 seconds 00:28:04.533 00:28:04.533 Latency(us) 00:28:04.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:04.533 =================================================================================================================== 00:28:04.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:28:04.533 00:05:34 keyring_file -- common/autotest_common.sh@972 -- # wait 3510174 00:28:04.792 00:05:35 keyring_file -- keyring/file.sh@21 -- # killprocess 3508571 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 3508571 ']' 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@952 -- # kill -0 3508571 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@953 -- # uname 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3508571 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with 
pid 3508571' 00:28:04.793 killing process with pid 3508571 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@967 -- # kill 3508571 00:28:04.793 [2024-07-25 00:05:35.227499] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:04.793 00:05:35 keyring_file -- common/autotest_common.sh@972 -- # wait 3508571 00:28:05.359 00:28:05.360 real 0m15.445s 00:28:05.360 user 0m37.293s 00:28:05.360 sys 0m3.483s 00:28:05.360 00:05:35 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:05.360 00:05:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:05.360 ************************************ 00:28:05.360 END TEST keyring_file 00:28:05.360 ************************************ 00:28:05.360 00:05:35 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:28:05.360 00:05:35 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:05.360 00:05:35 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:05.360 00:05:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:05.360 00:05:35 -- common/autotest_common.sh@10 -- # set +x 00:28:05.360 ************************************ 00:28:05.360 START TEST keyring_linux 00:28:05.360 ************************************ 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:28:05.360 * Looking for test storage... 00:28:05.360 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.360 00:05:35 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:05.360 00:05:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.360 00:05:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.360 00:05:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.360 00:05:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.360 00:05:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.360 00:05:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.360 00:05:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:28:05.360 00:05:35 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@14 -- # 
key1=112233445566778899aabbccddeeff00 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:28:05.360 /tmp/:spdk-test:key0 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:28:05.360 00:05:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:28:05.360 00:05:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:28:05.360 /tmp/:spdk-test:key1 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3510538 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:28:05.360 00:05:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3510538 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3510538 ']' 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:05.360 00:05:35 keyring_linux 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:05.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.360 00:05:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:05.360 [2024-07-25 00:05:35.921320] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:28:05.360 [2024-07-25 00:05:35.921414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510538 ] 00:28:05.360 EAL: No free 2048 kB hugepages reported on node 1 00:28:05.619 [2024-07-25 00:05:35.982459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.619 [2024-07-25 00:05:36.090676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.877 00:05:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.877 00:05:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:05.877 00:05:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:28:05.877 00:05:36 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.877 00:05:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:05.877 [2024-07-25 00:05:36.348655] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.877 null0 00:28:05.877 [2024-07-25 00:05:36.380701] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:05.878 [2024-07-25 00:05:36.381155] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.878 00:05:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:28:05.878 676041529 00:28:05.878 00:05:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:28:05.878 547310691 00:28:05.878 00:05:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3510673 00:28:05.878 00:05:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:28:05.878 00:05:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3510673 /var/tmp/bperf.sock 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 3510673 ']' 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:05.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
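[editor's note] The two keyctl add calls above are the core of the keyring_linux setup: the interchange PSK produced by format_interchange_psk (the NVMe/TCP TLS interchange format, NVMeTLSkey-1:<hash>:<base64 of the key bytes plus a CRC-32>:) is stored as a user-type key in the kernel session keyring @s, and keyctl echoes the serial number that the later checks compare against (676041529 and 547310691 in this run). A minimal sketch of that step, reusing the key0 material from the trace; the shell variable names are illustrative:

    # Store the interchange PSK as a 'user' key in the session keyring (@s);
    # keyctl prints the new key's serial number on stdout.
    psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
    sn=$(keyctl add user :spdk-test:key0 "$psk" @s)
    # The serial is recoverable from the description alone...
    keyctl search @s user :spdk-test:key0    # -> same serial as $sn
    # ...and the payload can be read back verbatim for verification.
    keyctl print "$sn"                        # -> the NVMeTLSkey-1:00:... string

bdevperf is then handed the key by description (--psk :spdk-test:key0) rather than by the /tmp/:spdk-test:key0 file path, which is what the keyring_linux_set_options --enable call further down switches on.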
00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.878 00:05:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:05.878 [2024-07-25 00:05:36.446039] Starting SPDK v24.09-pre git sha1 a1abc21f8 / DPDK 24.03.0 initialization... 00:28:05.878 [2024-07-25 00:05:36.446117] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3510673 ] 00:28:05.878 EAL: No free 2048 kB hugepages reported on node 1 00:28:06.136 [2024-07-25 00:05:36.507142] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.136 [2024-07-25 00:05:36.623201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.136 00:05:36 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.136 00:05:36 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:28:06.136 00:05:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:28:06.136 00:05:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:28:06.394 00:05:36 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:28:06.394 00:05:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:06.652 00:05:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:06.652 00:05:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:28:06.911 [2024-07-25 00:05:37.470055] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:07.170 nvme0n1 00:28:07.170 00:05:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:28:07.170 00:05:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:28:07.170 00:05:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:07.170 00:05:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:07.170 00:05:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:07.170 00:05:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.428 00:05:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:28:07.428 00:05:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:07.428 00:05:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:28:07.428 00:05:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:28:07.428 00:05:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:07.428 00:05:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:07.428 00:05:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@25 -- # sn=676041529 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 676041529 == \6\7\6\0\4\1\5\2\9 ]] 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 676041529 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:28:07.686 00:05:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:07.686 Running I/O for 1 seconds... 00:28:08.622 00:28:08.622 Latency(us) 00:28:08.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:08.622 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:08.622 nvme0n1 : 1.02 5578.62 21.79 0.00 0.00 22767.14 10194.49 34175.81 00:28:08.622 =================================================================================================================== 00:28:08.622 Total : 5578.62 21.79 0.00 0.00 22767.14 10194.49 34175.81 00:28:08.622 0 00:28:08.622 00:05:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:08.622 00:05:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:08.880 00:05:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:28:08.880 00:05:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:28:08.880 00:05:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:28:08.880 00:05:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:28:08.880 00:05:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:28:08.880 00:05:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:09.139 00:05:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:28:09.139 00:05:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:28:09.139 00:05:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:28:09.139 00:05:39 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
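[editor's note] check_keys, traced above, cross-checks the RPC server's view of the registered keys against the kernel keyring itself. A rough reconstruction of the helper under that reading, with bperf_rpc standing in for the full rpc.py -s /var/tmp/bperf.sock invocation the log spells out (the traced version additionally reads the payload back with keyctl print for a byte-for-byte compare):

    bperf_rpc() { rpc.py -s /var/tmp/bperf.sock "$@"; }  # illustrative wrapper

    check_keys() {
        local count=$1 name=$2 sn
        # The server must report exactly $count registered keys...
        (( $(bperf_rpc keyring_get_keys | jq length) == count ))
        (( count == 0 )) && return
        # ...and the serial it reports for $name must be the serial the
        # kernel session keyring resolves for the same description.
        sn=$(bperf_rpc keyring_get_keys \
             | jq -r ".[] | select(.name == \"$name\") | .sn")
        [[ $sn == $(keyctl search @s user "$name") ]]
    }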
00:28:09.139 00:05:39 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:09.139 00:05:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:28:09.397 [2024-07-25 00:05:39.953763] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:09.397 [2024-07-25 00:05:39.954655] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02890 (107): Transport endpoint is not connected 00:28:09.397 [2024-07-25 00:05:39.955646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d02890 (9): Bad file descriptor 00:28:09.397 [2024-07-25 00:05:39.956643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:09.397 [2024-07-25 00:05:39.956666] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:09.397 [2024-07-25 00:05:39.956681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:09.397 request: 00:28:09.397 { 00:28:09.397 "name": "nvme0", 00:28:09.397 "trtype": "tcp", 00:28:09.397 "traddr": "127.0.0.1", 00:28:09.397 "adrfam": "ipv4", 00:28:09.397 "trsvcid": "4420", 00:28:09.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:09.397 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:09.397 "prchk_reftag": false, 00:28:09.397 "prchk_guard": false, 00:28:09.397 "hdgst": false, 00:28:09.397 "ddgst": false, 00:28:09.397 "psk": ":spdk-test:key1", 00:28:09.397 "method": "bdev_nvme_attach_controller", 00:28:09.397 "req_id": 1 00:28:09.397 } 00:28:09.397 Got JSON-RPC error response 00:28:09.397 response: 00:28:09.397 { 00:28:09.397 "code": -5, 00:28:09.397 "message": "Input/output error" 00:28:09.397 } 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@33 -- # sn=676041529 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 676041529 00:28:09.397 1 links removed 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 
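[editor's note] Two details worth noting in the block above. First, the NOT wrapper asserts that the attach with :spdk-test:key1 fails, and it does: since the target side was provisioned with key0's PSK, the handshake is rejected and the RPC surfaces -5 (Input/output error), the response captured in the log. Second, the cleanup trap resolves each description back to its serial before unlinking, which is why keyctl reports '1 links removed' per key. In short:

    # Cleanup: description -> serial -> unlink from the session keyring.
    for name in :spdk-test:key0 :spdk-test:key1; do
        sn=$(keyctl search @s user "$name") && keyctl unlink "$sn" @s
    done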
00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@33 -- # sn=547310691 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 547310691 00:28:09.397 1 links removed 00:28:09.397 00:05:39 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3510673 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3510673 ']' 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3510673 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.397 00:05:39 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3510673 00:28:09.397 00:05:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:09.397 00:05:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:09.397 00:05:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3510673' 00:28:09.397 killing process with pid 3510673 00:28:09.397 00:05:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 3510673 00:28:09.397 Received shutdown signal, test time was about 1.000000 seconds 00:28:09.397 00:28:09.397 Latency(us) 00:28:09.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:09.398 =================================================================================================================== 00:28:09.398 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:09.398 00:05:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 3510673 00:28:09.656 00:05:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3510538 00:28:09.656 00:05:40 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 3510538 ']' 00:28:09.656 00:05:40 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 3510538 00:28:09.656 00:05:40 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:28:09.656 00:05:40 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:09.656 00:05:40 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 3510538 00:28:09.913 00:05:40 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:09.913 00:05:40 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:09.913 00:05:40 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 3510538' 00:28:09.913 killing process with pid 3510538 00:28:09.913 00:05:40 keyring_linux -- common/autotest_common.sh@967 -- # kill 3510538 00:28:09.913 00:05:40 keyring_linux -- common/autotest_common.sh@972 -- # wait 3510538 00:28:10.172 00:28:10.172 real 0m4.969s 00:28:10.172 user 0m9.359s 00:28:10.172 sys 0m1.560s 00:28:10.172 00:05:40 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:10.172 00:05:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:28:10.172 ************************************ 00:28:10.172 END TEST keyring_linux 00:28:10.172 ************************************ 00:28:10.172 00:05:40 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- 
spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:28:10.172 00:05:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:28:10.172 00:05:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:28:10.172 00:05:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:28:10.172 00:05:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:28:10.172 00:05:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:28:10.172 00:05:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:28:10.172 00:05:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:10.172 00:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.172 00:05:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:28:10.172 00:05:40 -- common/autotest_common.sh@1390 -- # local autotest_es=0 00:28:10.172 00:05:40 -- common/autotest_common.sh@1391 -- # xtrace_disable 00:28:10.172 00:05:40 -- common/autotest_common.sh@10 -- # set +x 00:28:12.074 INFO: APP EXITING 00:28:12.074 INFO: killing all VMs 00:28:12.074 INFO: killing vhost app 00:28:12.074 INFO: EXIT DONE 00:28:13.009 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:28:13.009 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:28:13.009 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:28:13.009 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:28:13.267 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:28:13.267 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:28:13.267 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:28:13.267 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:28:13.267 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:28:13.267 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:28:13.267 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:28:13.267 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:28:13.267 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:28:13.267 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:28:13.267 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:28:13.267 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:28:13.267 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:28:14.638 Cleaning 00:28:14.638 Removing: /var/run/dpdk/spdk0/config 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:28:14.638 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:14.638 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:14.638 Removing: /var/run/dpdk/spdk1/config 00:28:14.638 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:28:14.638 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:28:14.638 Removing: /var/run/dpdk/spdk1/hugepage_info 00:28:14.638 Removing: /var/run/dpdk/spdk1/mp_socket 00:28:14.638 Removing: /var/run/dpdk/spdk2/config 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:28:14.638 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:28:14.638 Removing: /var/run/dpdk/spdk2/hugepage_info 00:28:14.638 Removing: /var/run/dpdk/spdk3/config 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:28:14.638 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:28:14.638 Removing: /var/run/dpdk/spdk3/hugepage_info 00:28:14.638 Removing: /var/run/dpdk/spdk4/config 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:28:14.638 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:28:14.638 Removing: /var/run/dpdk/spdk4/hugepage_info 00:28:14.638 Removing: /dev/shm/bdev_svc_trace.1 00:28:14.638 Removing: /dev/shm/nvmf_trace.0 00:28:14.638 Removing: /dev/shm/spdk_tgt_trace.pid3249619 00:28:14.638 Removing: /var/run/dpdk/spdk0 00:28:14.638 Removing: /var/run/dpdk/spdk1 00:28:14.638 Removing: /var/run/dpdk/spdk2 00:28:14.638 Removing: /var/run/dpdk/spdk3 00:28:14.638 Removing: /var/run/dpdk/spdk4 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3247948 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3248683 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3249619 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3250067 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3250755 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3250984 00:28:14.638 Removing: 
/var/run/dpdk/spdk_pid3251674 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3251750 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3251990 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3253199 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3254263 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3254523 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3254725 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3255044 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3255246 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3255620 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3256151 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3256371 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3256561 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3258974 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3259197 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3259371 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3259496 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3259880 00:28:14.638 Removing: /var/run/dpdk/spdk_pid3259945 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260371 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260385 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260669 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260685 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260855 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3260973 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3261342 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3261501 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3261813 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3261985 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262011 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262196 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262352 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262513 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262782 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3262946 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3263105 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3263376 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3263540 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3263696 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3263971 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3264127 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3264307 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3264560 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3264724 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3264978 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3265154 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3265309 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3265591 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3265746 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3265910 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3266182 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3266253 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3266500 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3268730 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3271315 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3278280 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3278803 00:28:14.896 Removing: /var/run/dpdk/spdk_pid3281195 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3281472 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3284107 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3288435 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3290546 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3296908 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3302118 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3303435 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3304109 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3314466 00:28:14.897 Removing: 
/var/run/dpdk/spdk_pid3316863 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3343325 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3346621 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3350560 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3354396 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3354402 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3355053 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3355595 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356250 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356646 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356659 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356910 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356932 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3356956 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3357612 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3358259 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3358860 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3359328 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3359411 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3359576 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3360962 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3361782 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3367104 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3392351 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3395133 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3396312 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3397627 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3397679 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3397787 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3397926 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3398356 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3399684 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3400420 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3400735 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3402346 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3402771 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3403332 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3405725 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3411621 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3415015 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3418779 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3419863 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3420956 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3423536 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3426021 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3430105 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3430216 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3433000 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3433139 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3433273 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3433541 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3433552 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3436304 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3436639 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3439301 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3441270 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3444689 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3448015 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3454980 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3459338 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3459341 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3471680 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3472215 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3472755 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3473286 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3473872 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3474281 00:28:14.897 Removing: 
/var/run/dpdk/spdk_pid3474694 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3475098 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3477592 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3477813 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3481620 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3481702 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3483342 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3488963 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3488968 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3491877 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3493281 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3494693 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3495547 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3497067 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3497830 00:28:14.897 Removing: /var/run/dpdk/spdk_pid3503227 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3503619 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3504013 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3505447 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3505844 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3506229 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3508571 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3508706 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3510174 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3510538 00:28:15.155 Removing: /var/run/dpdk/spdk_pid3510673 00:28:15.155 Clean 00:28:15.155 00:05:45 -- common/autotest_common.sh@1449 -- # return 0 00:28:15.155 00:05:45 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:28:15.155 00:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.155 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.155 00:05:45 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:28:15.155 00:05:45 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:15.155 00:05:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.155 00:05:45 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:28:15.155 00:05:45 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:28:15.155 00:05:45 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:28:15.155 00:05:45 -- spdk/autotest.sh@391 -- # hash lcov 00:28:15.155 00:05:45 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:28:15.155 00:05:45 -- spdk/autotest.sh@393 -- # hostname 00:28:15.155 00:05:45 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:28:15.413 geninfo: WARNING: invalid characters removed from testname! 
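[editor's note] The coverage post-processing that follows is the usual lcov aggregate-then-filter sequence. Condensed, with the long --rc/genhtml flag set elided and the paths abbreviated (the log spells out the full workspace paths):

    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    out=$ws/spdk/../output
    # Merge the pre-test baseline with the capture taken after the run...
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # ...then strip coverage for code that isn't ours: DPDK, system
    # headers, example apps, and helper tools.
    for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' \
               '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov -q -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
    done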
00:28:47.511 00:06:13 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:48.446 00:06:18 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:52.632 00:06:22 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:55.911 00:06:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:00.096 00:06:30 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:04.281 00:06:34 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:29:07.564 00:06:37 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:07.564 00:06:37 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:07.564 00:06:37 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:07.564 00:06:37 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:07.564 00:06:37 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:07.564 00:06:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.564 00:06:37 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.564 00:06:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.565 00:06:37 -- paths/export.sh@5 -- $ export PATH 00:29:07.565 00:06:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.565 00:06:37 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:29:07.565 00:06:37 -- common/autobuild_common.sh@447 -- $ date +%s 00:29:07.565 00:06:37 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721858797.XXXXXX 00:29:07.565 00:06:37 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721858797.xyXdhH 00:29:07.565 00:06:37 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:29:07.565 00:06:37 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:29:07.565 00:06:37 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:29:07.565 00:06:37 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:29:07.565 00:06:37 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:29:07.565 00:06:37 -- common/autobuild_common.sh@463 -- $ get_config_params 00:29:07.565 00:06:37 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:29:07.565 00:06:37 -- common/autotest_common.sh@10 -- $ set +x 00:29:07.565 00:06:37 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:29:07.565 00:06:37 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:29:07.565 00:06:37 -- pm/common@17 -- $ local monitor 00:29:07.565 00:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.565 00:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.565 00:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.565 00:06:37 -- pm/common@21 -- $ date +%s 00:29:07.565 00:06:37 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:07.565 00:06:37 -- pm/common@21 -- $ date +%s 00:29:07.565 
00:06:37 -- pm/common@25 -- $ sleep 1 00:29:07.565 00:06:37 -- pm/common@21 -- $ date +%s 00:29:07.565 00:06:37 -- pm/common@21 -- $ date +%s 00:29:07.565 00:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721858797 00:29:07.565 00:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721858797 00:29:07.565 00:06:37 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721858797 00:29:07.565 00:06:37 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721858797 00:29:07.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721858797_collect-vmstat.pm.log 00:29:07.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721858797_collect-cpu-load.pm.log 00:29:07.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721858797_collect-cpu-temp.pm.log 00:29:07.565 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721858797_collect-bmc-pm.bmc.pm.log 00:29:08.499 00:06:38 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:29:08.499 00:06:38 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:29:08.499 00:06:38 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:08.499 00:06:38 -- spdk/autopackage.sh@13 -- $ [[ '' -eq 1 ]] 00:29:08.499 00:06:38 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:29:08.499 00:06:38 -- spdk/autopackage.sh@19 -- $ timing_finish 00:29:08.499 00:06:38 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:08.499 00:06:38 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:29:08.499 00:06:38 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:08.499 00:06:39 -- spdk/autopackage.sh@20 -- $ exit 0 00:29:08.499 00:06:39 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:08.499 00:06:39 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:08.499 00:06:39 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:08.499 00:06:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.499 00:06:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:29:08.499 00:06:39 -- pm/common@44 -- $ pid=3521003 00:29:08.499 00:06:39 -- pm/common@50 -- $ kill -TERM 3521003 00:29:08.499 00:06:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.499 00:06:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:29:08.499 00:06:39 -- pm/common@44 -- $ pid=3521005 00:29:08.499 00:06:39 -- pm/common@50 -- $ kill 
-TERM 3521005 00:29:08.499 00:06:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.499 00:06:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:29:08.499 00:06:39 -- pm/common@44 -- $ pid=3521007 00:29:08.499 00:06:39 -- pm/common@50 -- $ kill -TERM 3521007 00:29:08.499 00:06:39 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:08.499 00:06:39 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:29:08.499 00:06:39 -- pm/common@44 -- $ pid=3521036 00:29:08.499 00:06:39 -- pm/common@50 -- $ sudo -E kill -TERM 3521036 00:29:08.499 + [[ -n 3164244 ]] 00:29:08.499 + sudo kill 3164244 00:29:08.509 [Pipeline] } 00:29:08.523 [Pipeline] // stage 00:29:08.528 [Pipeline] } 00:29:08.540 [Pipeline] // timeout 00:29:08.545 [Pipeline] } 00:29:08.559 [Pipeline] // catchError 00:29:08.563 [Pipeline] } 00:29:08.579 [Pipeline] // wrap 00:29:08.584 [Pipeline] } 00:29:08.597 [Pipeline] // catchError 00:29:08.605 [Pipeline] stage 00:29:08.607 [Pipeline] { (Epilogue) 00:29:08.620 [Pipeline] catchError 00:29:08.622 [Pipeline] { 00:29:08.636 [Pipeline] echo 00:29:08.637 Cleanup processes 00:29:08.643 [Pipeline] sh 00:29:08.930 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:08.930 3521139 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:29:08.930 3521269 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:08.945 [Pipeline] sh 00:29:09.230 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:29:09.230 ++ grep -v 'sudo pgrep' 00:29:09.230 ++ awk '{print $1}' 00:29:09.230 + sudo kill -9 3521139 00:29:09.242 [Pipeline] sh 00:29:09.554 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:21.763 [Pipeline] sh 00:29:22.050 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:22.050 Artifacts sizes are good 00:29:22.066 [Pipeline] archiveArtifacts 00:29:22.074 Archiving artifacts 00:29:22.284 [Pipeline] sh 00:29:22.572 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:29:22.588 [Pipeline] cleanWs 00:29:22.599 [WS-CLEANUP] Deleting project workspace... 00:29:22.599 [WS-CLEANUP] Deferred wipeout is used... 00:29:22.607 [WS-CLEANUP] done 00:29:22.609 [Pipeline] } 00:29:22.630 [Pipeline] // catchError 00:29:22.643 [Pipeline] sh 00:29:22.930 + logger -p user.info -t JENKINS-CI 00:29:22.958 [Pipeline] } 00:29:22.967 [Pipeline] // stage 00:29:22.970 [Pipeline] } 00:29:22.981 [Pipeline] // node 00:29:22.984 [Pipeline] End of Pipeline 00:29:23.010 Finished: SUCCESS
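[editor's note] For reference, the stale-process sweep in the epilogue above reduces to one pipeline: list everything still referencing the workspace, drop the pgrep entry itself, and force-kill the rest. A generic sketch (xargs -r stands in for the command substitution the script actually uses, so an empty match list is a no-op):

    sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
        | grep -v 'sudo pgrep' \
        | awk '{print $1}' \
        | xargs -r sudo kill -9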